Hi guys.
Throughout my time with Linux, and across the hundreds of tinycore USB thumb drives I've made, I have to say I've never used dd, as it has never been proven to be MLC-flash friendly; that covers USB thumb drives, SSDs and SD cards. By "friendly" I mean not degrading performance through incorrect erase handling, partition offset, and treatment of empty space.
Maybe that worry isn't justified; I know dd has been around for a long time, from the days of archaic HDDs, but frankly I'm not sure it has ever seen a NAND-friendly update.
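On the partition-offset point, the alignment is easy to check; a quick sketch, assuming a hypothetical stick at /dev/sdX with a first partition sdX1:

# Partition start, reported in 512-byte sectors:
cat /sys/block/sdX/sdX1/start
# 2048 (= 1 MiB) is the usual default and is aligned to any common erase-block size.

# Or let parted do the math:
sudo parted /dev/sdX align-check optimal 1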
In addition, while I've never used dd, I constantly read that it can exit before the kernel's write cache has been flushed to the device, which is somewhat disconcerting. Maybe someone can clarify?
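For what it's worth, my understanding is that this is normally handled by asking dd to flush explicitly; a minimal sketch, assuming GNU dd and a hypothetical target /dev/sdX:

# conv=fsync makes dd fsync() the output before exiting, so it doesn't
# return until the data has actually reached the device:
sudo dd if=image.iso of=/dev/sdX bs=4M conv=fsync status=progress

# Or simply force all dirty buffers out afterwards:
sudo dd if=image.iso of=/dev/sdX bs=4M && sync

Corrections welcome if that's not the whole story.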
I prefer to "erase" the drive safely first if it has been used before (writing all 1s to the drive is safe, since that's NAND's erased state; programming every page with zeros, as is common practice with HDDs, is not), then partition and format as required. Mount both the ISO and USB partitions, copy with rsync or plain old simple and reliable cp, and lastly install the desired boot-loader. This method has never failed.
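Roughly, the workflow looks like this; a sketch only, with hypothetical paths /dev/sdX for the stick and image.iso for the source:

# Fill the drive with 0xFF (all 1s); dd stops when the device is full:
tr '\0' '\377' < /dev/zero | sudo dd of=/dev/sdX bs=4M conv=fsync
# (On drives that support TRIM, blkdiscard /dev/sdX lets the controller
# do a real erase instead.)

# After partitioning and formatting, mount and copy:
sudo mkdir -p /mnt/iso /mnt/usb
sudo mount -o loop image.iso /mnt/iso
sudo mount /dev/sdX1 /mnt/usb
sudo rsync -a /mnt/iso/ /mnt/usb/    # or: sudo cp -a /mnt/iso/. /mnt/usb/

# Finally install the boot-loader of choice (syslinux, extlinux, grub...).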
This caught my eye:
sudo dd if="$1" | pv -s "$DrvSize" | sudo dd of="$2" bs=64K
"bs=64K" This is purely a read/write performance option right, so why enforce a limit of only 64K? Is there a PC today that can't handle 4M for example?
The sizes that actually limit performance here are available RAM and the usual flash geometry defaults: 1. page size = 4K and 2. erase-block size = 128K.
64K matches neither of those. Maybe I missed something, but I'm just wondering what the benefit is and where this value came from?
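Anyone curious can measure it; here's a throw-away sketch (assumes GNU coreutils for numfmt, and dd's own transfer-rate report) that writes 256 MB at several block sizes to a scratch file, /tmp/bs-test.img:

# Same total data at each block size; dd prints the throughput on its last line.
for bs in 4K 64K 1M 4M; do
    printf 'bs=%s: ' "$bs"
    dd if=/dev/zero of=/tmp/bs-test.img bs=$bs \
       count=$((268435456 / $(numfmt --from=iec $bs))) \
       conv=fdatasync 2>&1 | tail -n1
done
rm -f /tmp/bs-test.img

Point it at a real stick instead of /tmp to see the flash behaviour, but only on a drive you don't mind wiping.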
This script seems really interesting. I'm still reviewing it to see how, and if, it needs adapting for TC use:
https://github.com/jsamr/bootiso