Commit 1d8f625

ZTS: Remove ashift setting from dedup_quota test (#17250)
The test writes 1M 1KB blocks, which may produce up to 1GB of dirty data. On top of that, ashift=12 likely produces an additional 4GB of ZIO buffers during the sync process. On top of that, we likely need some page cache, since the pool resides on files. And finally, we need to cache the DDT. It is not surprising that the test regularly ends up in OOMs, possibly depending on TXG size variations.

Also replace fio and its rather strange parameter set with a series of dd writes and TXG commits, which is all we need here. While here, remove compression: it has nothing to do with this test and only wastes CI CPU time.

Signed-off-by: Alexander Motin <[email protected]>
Sponsored by: iXsystems, Inc.
Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
1 parent 8d14897 commit 1d8f625
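As a back-of-the-envelope sketch (not part of the test or the commit), the memory figures quoted in the commit message can be reproduced with simple arithmetic: 1M blocks at recordsize=1K is 1 GiB of dirty data, and padding each block to the 4 KiB minimum allocation implied by ashift=12 quadruples that to roughly 4 GiB of ZIO buffers.

```shell
# Illustrative estimate only; all figures are rough, per the commit message.
blocks=$(( 1024 * 1024 ))   # ~1M blocks written by the test
block_size=1024             # recordsize=1K, in bytes

# Dirty data: 1M x 1KB = 1 GiB
dirty_bytes=$(( blocks * block_size ))
echo "dirty data: $(( dirty_bytes / 1024 / 1024 / 1024 )) GiB"

# With ashift=12, each 1KB logical block occupies a 4 KiB (2^12) device
# block, so sync-time ZIO buffers may grow to roughly 4x: ~4 GiB.
ashift_bytes=$(( 1 << 12 ))
zio_bytes=$(( blocks * ashift_bytes ))
echo "ZIO buffers at ashift=12: $(( zio_bytes / 1024 / 1024 / 1024 )) GiB"
```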

File tree

1 file changed: +10 −11 lines

tests/zfs-tests/tests/functional/dedup/dedup_quota.ksh

@@ -79,7 +79,7 @@ function do_setup
 {
 	log_must truncate -s 5G $VDEV_GENERAL
 	# Use 'xattr=sa' to prevent selinux xattrs influencing our accounting
-	log_must zpool create -o ashift=12 -f -O xattr=sa -m $MOUNTDIR $POOL $VDEV_GENERAL
+	log_must zpool create -f -O xattr=sa -m $MOUNTDIR $POOL $VDEV_GENERAL
 	log_must zfs set compression=off dedup=on $POOL
 }

@@ -189,31 +189,30 @@ function ddt_dedup_vdev_limit
 	# add a dedicated dedup/special VDEV and enable an automatic quota
 	if (( RANDOM % 2 == 0 )) ; then
 		class="special"
+		size="200M"
 	else
 		class="dedup"
+		size="100M"
 	fi
-	log_must truncate -s 200M $VDEV_DEDUP
+	log_must truncate -s $size $VDEV_DEDUP
 	log_must zpool add $POOL $class $VDEV_DEDUP
 	log_must zpool set dedup_table_quota=auto $POOL

 	log_must zfs set recordsize=1K $POOL
-	log_must zfs set compression=zstd $POOL

 	# Generate a working set to fill up the dedup/special allocation class
-	log_must fio --directory=$MOUNTDIR --name=dedup-filler-1 \
-		--rw=read --bs=1m --numjobs=2 --iodepth=8 \
-		--size=512M --end_fsync=1 --ioengine=posixaio --runtime=1 \
-		--group_reporting --fallocate=none --output-format=terse \
-		--dedupe_percentage=0
-	log_must sync_pool $POOL
+	for i in {0..63}; do
+		log_must dd if=/dev/urandom of=$MOUNTDIR/file${i} bs=1M count=16
+		log_must sync_pool $POOL
+	done

 	zpool status -D $POOL
 	zpool list -v $POOL
 	echo DDT size $(dedup_table_size), with $(ddt_entries) entries

 	#
-	# With no DDT quota in place, the above workload will produce over
-	# 800,000 entries by using space in the normal class. With a quota, it
+	# With no DDT quota in place, the above workload will produce up to
+	# 1M of entries by using space in the normal class. With a quota, it
 	# should be well under 500,000. However, logged entries are hard to
 	# account for because they can appear on both logs, and can also
 	# represent an eventual removal. This isn't easily visible from
