I have a problem with my Bash script, which splits all the files in a directory into groups of roughly 1 GB each. The script looks like this:
#!/bin/bash
path=$1
unset i
echo "$path start"
fpath="$PWD/files"
# list every regular file under $path
find "$path" -type f > "$fpath"
max=$(wc -l < "$fpath")
# record each file's name and size (this forks du and awk once per file)
while read -r file; do
    files[i]=$file
    size[i]=$(du -sb "$file" | awk '{print $1}')   # size in bytes (GNU du)
    ((i++))
    echo -ne "$i/$max\r"
done < "$fpath"
echo -ne '\n'
echo 'sizes and filenames done'
unset weight index groupid
# greedily pack files into a group until it exceeds 1 GiB, then flush it
for item in "${!files[@]}"; do
    ((weight += size[item]))
    group[index]=${files[$item]}
    ((index++))
    if ((weight > 2**30)); then
        ((groupid++))
        for filename in "${group[@]}"; do
            echo "$filename"
        done > euenv.part"$groupid"
        unset group index weight
    fi
done
# write whatever is left as the final, smaller group
if [ ${#group[@]} -gt 0 ]; then
    ((groupid++))
    for filename in "${group[@]}"; do
        echo "$filename"
    done > euenv.part"$groupid"
fi
echo 'done'
It works, but it is very slow. Can anyone give me some advice on how to make it faster? Thanks.
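
I suspect the bottleneck is that du and awk are forked once for every file. For reference, here is a minimal sketch of a single-pass alternative, assuming GNU find (the -printf action and its %s byte-size directive are not POSIX):

#!/bin/bash
# Sketch only: %s prints the apparent size in bytes, %p the path.
# It breaks on filenames containing newlines or leading/trailing spaces.
path=$1
groupid=1
weight=0
group=()

while read -r bytes file; do
    group+=("$file")
    ((weight += bytes))
    if ((weight > 2**30)); then        # group reached 1 GiB: flush it
        printf '%s\n' "${group[@]}" > euenv.part"$groupid"
        ((groupid++))
        group=() weight=0
    fi
done < <(find "$path" -type f -printf '%s %p\n')

# flush the final, smaller group if anything is left
((${#group[@]})) && printf '%s\n' "${group[@]}" > euenv.part"$groupid"

This calls find only once instead of spawning du for every file, so most of the per-file process startup cost should disappear.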