Tiny Core Linux
General TC => Tiny Core on Virtual Machines => Topic started by: nick65go on February 20, 2023, 03:05:01 AM
-
FYI: I saw that (in both the x86 and x86_64 versions) exfat.ko.gz is unusually placed under drivers instead of the fs (file system) branch, e.g. /modules.gz/modules/lib/modules/6.1.2-tinycore/kernel/drivers/scsi/exfat/.
In qemu (ver 3.1), booting TC14 x86_64 with VIRTIO options for both the cdrom and hda, /cde is not used and Xfbdev is not started automatically, because /dev/vda# is not mounted.
qemu-system-x86_64 -accel kvm -machine q35 -cpu max -m 128M -vga virtio -kernel vmlinuz64 -initrd MyCore.gz -drive file=MyDVD.iso,if=virtio,media=cdrom,readonly=on,index=1 -drive file=Mydisk.qcow2,if=virtio,media=disk,index=2 -boot c -append "vga=ask nozram noswap root=/ init=/init"
Maybe I missed some initial driver parameters? After booting I can mount the cdrom at /mnt/vda and the hdd at /dev/vdb1, OR at /dev/sr0 and /dev/sda1 if I do not use virtio emulation.
EDIT: maybe it is my mistake; perhaps the /cde folder can only be on the boot device, not on another device.
-
Hi nick65go
... EDIT: maybe it is my mistake; perhaps the /cde folder can only be on the boot device, not on another device.
I don't think that is true.
tc-config calls tce-setup to load extensions.
tce-setup contains the following:
# Finally check for CD Extensions if requested
if [ "$CDE" ]; then
    # Some cd drives are slow - if cde was requested, wait for udev to settle
    [ ! -s /etc/sysconfig/cdroms ] && udevadm settle --timeout 5
    if [ -s /etc/sysconfig/cdroms ]; then
        for DEV in `cat /etc/sysconfig/cdroms`; do
            process_CD
        done
    fi
fi
# If nothing loaded then also check for pseudo CD, e.g., isohybrid
if [ "$CDE" -a -z "$CDELIST" ]; then
    sleep 5
    DEV="$(autoscan 'cde' 'd')"
    process_CD
fi
If it doesn't find any CDs, then it scans the rest of the system for a device
containing a /cde directory.
In order for that to take place, you need to pass the cde boot code. I don't
see cde anywhere in your qemu command.
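For example, your boot line could become something like this (untested sketch; all options copied from your command, with only the cde boot code appended):

```shell
qemu-system-x86_64 -accel kvm -machine q35 -cpu max -m 128M -vga virtio \
  -kernel vmlinuz64 -initrd MyCore.gz \
  -drive file=MyDVD.iso,if=virtio,media=cdrom,readonly=on,index=1 \
  -drive file=Mydisk.qcow2,if=virtio,media=disk,index=2 \
  -boot c -append "vga=ask nozram noswap root=/ init=/init cde"
```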
-
In order for that to take place, you need to pass the cde boot code. I don't see cde anywhere in your qemu command.
Rich, you spot it! Thank you.
PS: I am in the process of renaming the (internal) tc functions and variables, replacing abc() with _abc(),
and likewise for tc variables, so set var=blah becomes set tc_var=blah; it is then also clear (to me) whether something is a function or a variable, etc.,
so the code is self-documenting and I "know" it is not an external command from /bin or /usr/bin (busybox/gnu tools, toybox). In the process I involuntarily "destroyed" some scripts.
-
For clarity, and so as not to confuse the audience, I am back to virgin files (vmlinuz, rootfs.gz, modules.gz, *.tcz), so Xvesa for now.
[BUT it is the same small "problem" as with Xfbdev, in both x86 and x86_64.]
I concatenated rootfs.gz + modules.gz into MyCore.gz (because I cannot pass multiple -initrd parameters to qemu :( ).
I created the file sda1.qcow2, formatted it as ext2 from linux and mounted it [so it has NO PARTITIONS], populated its /tce/optional folder with virgin *.tcz and *.tcz.dep files, and added /tce/onboot.lst etc.
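The concatenation works because a gzip stream may contain multiple members, and decompressing the concatenation yields the concatenated data. A minimal demonstration of the principle, with throwaway files (part1.gz/part2.gz are hypothetical stand-ins for rootfs.gz/modules.gz):

```shell
#!/bin/sh
# Concatenated gzip members decompress as one stream, which is why
# `cat rootfs.gz modules.gz > MyCore.gz` produces a valid single initrd.
printf 'hello ' | gzip > part1.gz
printf 'world\n' | gzip > part2.gz
cat part1.gz part2.gz > combined.gz   # same idea as cat rootfs.gz modules.gz > MyCore.gz
gunzip -c combined.gz                 # prints: hello world
```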
case 1: booting with "legacy" -hda is working, Xvesa shows up automatically.
qemu-system-i386.exe -machine q35 -cpu max -m 128M -vga virtio -kernel vmlinuz -initrd Mycore.gz -hda sda1.qcow2
case 2: booting with "Virtio" vda is NOT working by default, only manually.
qemu-system-i386.exe -machine q35 -cpu max -m 128M -vga virtio -kernel vmlinuz -initrd Mycore.gz -drive file=sda1.qcow2,if=virtio,media=disk
In /etc/fstab I have the mount point /mnt/vda mapped to /dev/vda, OK. But the mount command shows that /dev/vda is not mounted yet. So of course the tczs are not loaded, and Xvesa does not start.
I even tried individual -append cases, like tce=vda, tce=/mnt/vda, tce=/dev/vda etc. No success. :(
The bug: the virgin scripts mainly look for strings like sda*, not vda*. This happens with VIRTIO, whether sda.qcow2 is partitioned or not.
-
As for virtio disks, it's quite possible some scripts are missing vda* cases.
Right! The culprit section may be in /etc/init.d/tc-config, with the boot variable tcvd=something;
but I am rusty with shell variable truncation... I had no success with it, e.g. tcvd=/dev/vda/tce.
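For reference, the truncation is plain POSIX parameter expansion; a minimal sketch (variable names mirror tc-config, the value is a made-up example):

```shell
#!/bin/sh
# How the tcvd= boot value gets split into a device and a loop file.
TCVD="/dev/sda1/harddisk.img"   # hypothetical tcvd= value
TCVD="${TCVD#/}"                # strip leading "/"  -> dev/sda1/harddisk.img
TCVD="${TCVD#dev/}"             # strip "dev/"       -> sda1/harddisk.img
TCVD_DEVICE="${TCVD%%/*}"       # up to first "/"    -> sda1
TCVD_LOOPFILE="${TCVD#*/}"      # after first "/"    -> harddisk.img
echo "$TCVD_DEVICE $TCVD_LOOPFILE"
```

With tcvd=/dev/vda there is no loop file at all: the string left after stripping is just "vda", it contains no "/", so both expansions return "vda" and the device/loopfile comparison comes out equal.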
Today there is almost no point in simulating old pseudo-real devices, when Qemu (yes, 3.1 :) ) is so advanced at Virtio-everything (block devices, controllers, network cards, file shares, video, cpu, etc). A trimmed-down kernel with just the virtio drivers, possibly built in (and perhaps without spectre mitigations), would be nice to have.
PS: it is just for fun, to remind me of the flow: boot-loader -> kernel -> shell (+ libs) -> udev -> mounts -> login.
-
Hi nick65go
... PS: it is just for fun, to remind me of the flow: boot-loader -> kernel -> shell (+ libs) -> udev -> mounts -> login.
This appears to be accurate:
https://forum.tinycorelinux.net/index.php/topic,22020.0.html
-
Hi nick65go
If you run this:
autoscan-devices
does it find your virtual disk?
-
autoscan-devices
does it find your virtual disk?
Yes, result is "vda"
-
Wow, I think I found the "wrong" instruction; it is in /etc/init.d/tc-functions, at lines 91 and 106:
MOUNTPOINT="$(grep -i ^$D2\ /etc/mtab | awk '{print $2}' | head -n 1)"
I think it should be $D2, without the "\"; or $D2\s [where \s means space in regex].
I can execute D2=/dev/vda; ABC="$(grep -i ^$D2 /etc/fstab)"; echo $ABC
resulting in ABC="/dev/vda" from /etc/fstab, or ABC="" from /etc/mtab.
But with D2=/dev/vda; ABC="$(grep -i ^$D2\ /etc/fstab)"; echo $ABC
I stay in a loop forever.
-
Hi nick65go
... I think it should be $D2, without the "\"; or $D2\s [where \s means space in regex]. ...
No, it is correct as written. The "\ " tells grep to include a trailing space
as part of the search term. If you look at:
^$D2\ /etc
123456789
there are 2 spaces between \ and / in positions 6 and 7.
The first space is part of the search term and the second
separates search term from the file to be searched.
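A quick way to see the difference, using a made-up mtab-style line (mtab.test is a scratch file; quoting the pattern is equivalent to the unquoted `^$D2\ ` form in tc-functions):

```shell
#!/bin/sh
# With the trailing space, /dev/vda does not match the /dev/vda1 entry;
# without it, the bare prefix matches /dev/vda1 as well.
printf '/dev/vda1 /mnt/vda1 ext2 rw 0 0\n' > mtab.test
D2=/dev/vda
grep -c "^$D2 " mtab.test || true   # pattern "/dev/vda " -> prints 0
grep -c "^$D2"  mtab.test           # pattern "/dev/vda"  -> prints 1
```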
As near as I can tell, find_mountpoint works correctly:
tc@E310:~$ for DEVICE in `autoscan-devices`; do find_mountpoint $DEVICE; echo $MOUNTPOINT; done
/mnt/sda2
/mnt/sda7
/mnt/sda3
/mnt/sda1
/mnt/sda6
/mnt/sdb1
tc@E310:~$
Rename your /tce directory to /cde and add the cde boot code.
The cde directory must be in the root of the drive.
Remove any tce boot codes.
It should be able to find the /cde directory on its own.
-
Hi Rich, thanks for the feedback.
For clarification, let's forget about the "cde" boot code/parameter.
Let's focus only on the /tce folder on "disk" /dev/vda, in "partition" vda (not vda1, because I have no partitions), with boot code tcvd=/dev/vda (or tcvd=vda/whatever, etc.)
#Check for Virtual Hard Drive
if [ -n "$TCVD" ]; then
    wait $fstab_pid
    TCVD="${TCVD#/}"
    TCVD="${TCVD#dev/}"
    TCVD_DEVICE="${TCVD%%/*}"
    TCVD_LOOPFILE="${TCVD#*/}"
    if [ "$TCVD_DEVICE" == "$TCVD_LOOPFILE" ]; then
        TCVD_DEVICE="$(tc_autoscan $TCVD_LOOPFILE 'f')"
    fi
    PARTITION="${TCVD_DEVICE##/dev/}"
    find_mountpoint "$PARTITION"
    if [ -n "$MOUNTPOINT" ]; then
        [ "$MOUNTED" == "yes" ] || /bin/mount "$MOUNTPOINT"
        usleep_progress
        if [ -f "$MOUNTPOINT"/"$TCVD_LOOPFILE" ]; then
            [ -d /mnt/tcvd ] || /bin/mkdir /mnt/tcvd
            ln -sf "$MOUNTPOINT"/"$TCVD_LOOPFILE" /dev/tcvd
            printf "/dev/tcvd \t/mnt/tcvd \text2\tloop\t0 0 #Added by TC\n" >> /etc/fstab
            sync
        fi
    fi
I started with TCVD=/dev/vda; then TCVD_DEVICE=vda and TCVD_LOOPFILE=vda;
then I had a problem with $MOUNTPOINT being empty,
and the if [ -n "$MOUNTPOINT" ]; then bypasses everything. Hm..
I was running each instruction step by step, and last time it was something like: inside find_mountpoint "$PARTITION" it struggled with grep -i ^$D2 when $D2 was /dev/vda.
PS: if you have time, could you run the same steps to reproduce my findings for $D2=/dev/vda in that part of the code, to see what I mean?
-
"find_mountpoint vda" works correctly
. /etc/init.d/tc-functions; MOUNTED="fake"; find_mountpoint vda ; echo $MOUNTPOINT $ MOUNTED
will return internal MOUNTPOINT=/mnt/vda and internal MOUNTED=no (or my external MOUNTED=fake)
#Check for Virtual Hard Drive
if [ -n "$TCVD" ]; then                                # TCVD=/dev/vda
    wait $fstab_pid
    TCVD="${TCVD#/}"                                   # TCVD=dev/vda
    TCVD="${TCVD#dev/}"                                # TCVD=vda
    TCVD_DEVICE="${TCVD%%/*}"                          # TCVD_DEVICE=vda
    TCVD_LOOPFILE="${TCVD#*/}"                         # TCVD_LOOPFILE=vda (no "/", nothing stripped)
    if [ "$TCVD_DEVICE" == "$TCVD_LOOPFILE" ]; then    # yes
        TCVD_DEVICE="$(tc_autoscan $TCVD_LOOPFILE 'f')"   # tc_autoscan vda 'f' -> TCVD_DEVICE="" !?
    fi
    PARTITION="${TCVD_DEVICE##/dev/}"                  # PARTITION=""
    find_mountpoint "$PARTITION"                       # find_mountpoint "" -> MOUNTPOINT=""
    if [ -n "$MOUNTPOINT" ]; then
        ...
    fi
fi
Stop here for now; the problem is the assignment TCVD_DEVICE="$(tc_autoscan vda 'f')".
-
Hi nick65go
How to use/share Virtual Disks (Qemu) ?:
http://tinycorelinux.net/faq.html#qemu
-
Booting with kernel + initramfs, I have only filesystems in RAM, and no access to any *.qcow2 FROM INSIDE qemu. This is by design: to have access ONLY to what I want from inside qemu, and no undesired links to host files (except self-defined network or file shares, if/when needed), so no stupid program calls home to spy or accesses personal files -- in theory.
Maybe I need to define a new boot case myself, for a boot code like "virtio". So TCVD is for an image (qcow2, img, etc) on a disk FROM which TC booted -- in my case, from the void :)
But as you have all my boot parameters in my previous posts, please let me know if you have success with actual (not remastered) tinycore. As I said before, I can remaster with a trimmed-down kernel, all virtual devices, one user, and a small init script -- defining mount points and variables (language, path, lib configs, etc) -- in less than 4kb (like toybox). Basically I need a shell and udev. But I am interested in re-discovering what is new in tc14 in 2023. All is for fun (and learning -- I forget things the older I get), as there are a lot of ready-made distros that fully "install" in 10 minutes.
-
Hi nick65go
The tcvd boot code is looking for an image file.
Assuming your file containing /tce is on sda1 and is called qcow2, try adding this:
-append tcvd=sda1/qcow2 tce=tcvd restore=tcvd
-
In my naive understanding, and until someone proves the contrary (with a complete command-line example),
the TCVD (Tiny Core Virtual Disk) boot code will not work in the scenario "qemu -kernel aaa -initrd bbb -drive file=ccc,if=virtio, ..."
Please correct the statements below where I am wrong. Thanks.
1. Qemu boots the kernel (given on its command line)
2. Kernel sees the initrd (ramfs-type temporary memory, no block devices) and decompresses it in RAM (still not block-device aware)
3. Kernel searches a few hard-coded places for the starting init command (or init=/blah if given as a kernel boot parameter);
if init=/bin/sh, then the shell needs ONLY /dev/console, optionally /dev/null (I tested this)
4. Kernel will automatically populate a lot of devices, in 3 cases: mount devpts, mdev or udev;
with "mount devpts" the kernel mounts its new default devices over the original /dev :) (like 46 tty# nodes etc);
in tinycore's case /dev was already populated, and devpts is not mounted over it
5. Kernel now has device nodes /dev/*, even without them being listed in /etc/fstab or recorded as mounted in /etc/mtab
6. Kernel hands execution over to init, and supervises in the background
7. init can only run commands or scripts from RAM (the uncompressed rootfs.gz) and from any mounted devices (block, nfs etc), if any
8. So the initial /TEST/sda1.qcow2 (placed on the host machine) only exposes its "content" to the kernel and its rootfs system if it was on the qemu command line
9. Kernel does not see the original files sda1.qcow2 and rootfs.gz anymore
10. so TCVD=abc.img is useless now, because "inside /TEST/sda1.qcow2" there is no abc.img to mount
Summary: TCVD is not for qemu + virtio. Q.E.D.
-
Hi nick65go
Virtual disk support (tcvd, tiny core virtual disk) is new in Tiny Core v1.4
This was primarily setup for Qemu support, but is useful for anyone.
It uses the Qemu virtual disk image ext2, a loopback file.
To use it, first specify the 'harddisk' file with the tcvd boot option, followed
by your regular boot options using tcvd. Examples:
boot: tinycore waitusb=10 tcvd=harddisk.img tce=tcvd restore=tcvd
This will autoscan for the file named harddisk.img in level one directories and
setup mount capabilities as device tcvd. The subsequent tce=tcvd and restore=tcvd
will use the virtual drive special device.
boot: tinycore waitusb=10 tcvd=sda1/harddisk.img tce=tcvd restore=tcvd
This will look for the loopback file named harddisk.img on sda1 and set up a special
device tcvd used by the other boot options.
boot: tinycore waitusb=10 tcvd=sda1/qemu/harddisk.img tce=tcvd restore=tcvd
This is/was the typical Qemu setup that I used in Damn Small Linux.
Again a special device tcvd will be setup and then other boot options are able
to use it.
-
Hi Rich,
For clarity: I do not have a "harddisk.img" (neither inside the root "/" [a level-one directory] nor on another disk/partition) INSIDE my container (named XXX.qcow2), which is attached when I boot this GUEST machine (under qemu) with kernel + core.gz.
My container is a file, with no partitions on it, formatted as ext2; inside it I have only a /tce folder populated with *.tcz etc. So qemu boots with the container attached. The kernel and core.gz are not in the container.
The idea of the TCVD boot code (in TC 1.4) was based on the assumption that the kernel can see the HOST machine's root files (its /) and the HOST machine's partitions (/sda1), which is NOT the case with my qemu setup, as I explained. In my case the kernel does not see any "harddisk.img" image to mount in its environment.
PS: The only small bug here is that (by default) an instruction like "mount /dev/vda" is not issued automatically; I could remaster that in myself.
-
Hi nick65go
Having never used Qemu, I decided to install it on one of my machines
and play with it.
------- Boot ISO file, desktop auto loads from cde directory, additional extensions auto load from tce directory. -------
First I created an empty formatted image file. I mounted the file and created
a tce/optional directory structure and installed mc.tcz.
I also downloaded:
http://tinycorelinux.net/10.x/x86/release/TinyCore-10.1.iso
Then I ran:
qemu-system-i386 -enable-kvm -hda Apps.img -m 1G -cdrom TinyCore-10.1.iso -boot d
The syslinux boot loader displayed, I hit enter and a few moments later was presented with
a desktop. The wbar had an mc icon, showing that it found and loaded mc.tcz from my
Apps.img file which was mounted as /mnt/sda. The Apps utility found the tce directory
and was able to install extensions.
--------------------- Boot vmlinuz and core.gz, all extensions auto load from tce directory. ---------------------
First I created an empty formatted image file. I mounted the file and created
a tce/optional directory structure and installed Xvesa.tcz flwm_topside.tcz wbar.tcz aterm.tcz mc.tcz.
I also downloaded:
http://tinycorelinux.net/10.x/x86/release/TinyCore-10.1.iso
Then:
tc@E310:~/Qemu$ mkdir iso
tc@E310:~/Qemu$ sudo mount TinyCore-10.1.iso iso
mount: /home/tc/Qemu/iso: WARNING: device write-protected, mounted read-only.
# Copy vmlinuz and core.gz to current directory:
tc@E310:~/Qemu$ cp iso/boot/* .
cp: -r not specified; omitting directory 'iso/boot/isolinux'
And launch Qemu:
qemu-system-i386 -enable-kvm -hda Apps.img -m 1G -kernel vmlinuz -initrd core.gz -append "quiet"
As before, I was presented with a desktop and wbar had an mc icon.
The 2 attached files list all of the commands used to set up these tests.
The FetchExt.sh command is a script that can be found attached to this post:
https://forum.tinycorelinux.net/index.php/topic,23034.msg164745.html#msg164745
-
Hi Rich, thank you for info, feed-back and perseverance!
I hope I will not annoy you with my remarks; I am truly for collaboration. So, may I repeat myself again?
1. Using the old emulated hardware, TC works! I mean the syntax "qemu -hda xxx.img -cdrom yyy.iso". In this case PATA/SATA/AHCI is emulated for the hdd and SCSI for the cdrom, with a PIIX-style motherboard etc. All is OK, as I said in my previous posts. The device nodes are /dev/sdx for the HDD and /dev/sr# for the CDROM.
2. qemu-system-i386 -m 128M -kernel vmlinuz -initrd MyCore.gz -drive file=MyDVD.iso,if=virtio,media=cdrom,readonly=on -drive file=Mydisk.qcow2,if=virtio,media=disk
Using the new VIRTIO devices [the key part: ,if=virtio,] -- virtio-blk for the HDD and virtio-scsi for the CDROM -- does NOT work automatically. The device nodes are like /dev/vda for the HDD and /dev/vdb for the CDROM, depending on which is first and which second on the virtio-pci controller. PS: When I say "virtio all devices", I mean e.g. not using the implicit old-emulation intel E1000 network card but a virtio NIC instead, etc. The speed is 2x up to 10x faster -- in theory. There is no point in qemu emulating all the low-level real hardware; qemu can bypass/simplify the internal logic circuits and just deliver the correct in/out data flow of the virtual device.
-
Do you know if the correct modules for virtio (instead of the e1000e module) are in your iso file?
I think the modules are in a separate file, and you have to extract the correct ones, or sometimes even compile them.
-
@patrikg: Yes, I checked it; all the "legacy" modules are in modules.gz (or they are built in), so the old syntax works.
I also checked that all the necessary "virtio-xxx" modules are in modules.gz. For each of virtio-blk/scsi/pci/net I verified that all dependencies are present, using "modinfo -F depends xxx.ko.gz".
To summarize: I can boot into a /bin/sh shell, logged in as tc. Then AFTER I MANUALLY mount /dev/vda /mnt/vda (and make the proper link to /etc/sysconfig/tcedir), I can load each *.tcz manually and voila, Xvesa shows up. Nothing is missing file-wise; it is just that the virtual disk is not auto-mounted, so initially I stay in RAM with only the kernel and core.gz loaded (as if I had issued the boot codes base norestore).
PS: I do not have an ISO file. I use, from TC14, vmlinuz + (rootfs.gz + modules.gz => core.gz), plus manually downloaded Xvesa.tcz + Xprogs.tcz and all their dependencies in an ext2-formatted, partitionless xxx.qcow2 file.
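The manual recovery described above can be sketched roughly like this (untested; device and paths as in this thread, tce-load being Tiny Core's extension loader):

```shell
#!/bin/sh
# Manual workaround when the virtio disk is not auto-mounted.
sudo mount /dev/vda /mnt/vda                       # mount the virtio disk by hand
sudo ln -sf /mnt/vda/tce /etc/sysconfig/tcedir     # point TC at the tce directory
tce-load -i Xvesa Xprogs                           # then load extensions manually
```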
-
@Rich, to convince yourself that both your qemu syntax and mine are correct, you can combine them.
1. Just make an xxx.img file with dd, 1 MB in size; format it with mkfs.ext2 -- it will have no partitions by default; mount it somewhere; cd into the mounted loop; touch a.txt (to have a file inside it for identification); unmount it.
2. Use your normal qemu syntax plus mine, like this (to also attach the xxx.img virtual disk): qemu-system-i386 -enable-kvm -m 128M -cdrom TinyCore-10.1.iso -boot d -drive file=xxx.img,if=virtio,media=disk
FYI: my qemu 3.1 needs "-accel kvm", instead of "-enable-kvm"
Inside TC do "mount /dev/vda /mnt/vda"; then "ls /mnt/vda" will show the a.txt file.
This will prove the correct qemu syntax was used, and that all kernel modules are available;
see with lsmod what is loaded -- the virtio-xxx modules; the legacy modules are mostly built in and will not show in TC14.
I learned the syntax from the qemu 7.2 documentation on the internet.
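Step 1 above, as a command sketch (untested here; the loop mount needs root, and the mount point path is just an example):

```shell
#!/bin/sh
# Create a 1 MB raw image holding a whole-file ext2 filesystem (no partition
# table), with one marker file inside for later identification.
dd if=/dev/zero of=xxx.img bs=1M count=1
mkfs.ext2 -F xxx.img            # -F: allow formatting a plain file
mkdir -p /tmp/loopmnt
sudo mount -o loop xxx.img /tmp/loopmnt
sudo touch /tmp/loopmnt/a.txt   # marker file
sudo umount /tmp/loopmnt
```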
-
Hi nick65go
The diskfile.img contains a tce directory with the following installed onboot:
Xvesa.tcz, flwm_topside.tcz, wbar.tcz, aterm.tcz, mc.tcz, and grabber.tcz.
This detects vda , mounts it, loads all extensions, and displays a desktop:
qemu-system-i386 -enable-kvm -m 1G -kernel vmlinuz -initrd core.gz -append "quiet waitusb=2" -drive file=diskfile.img,if=virtio,media=disk,format=raw
Remember, Tinycore only automounts drives with tce, persistent home, or persistent opt directories.
-
Hi Rich, thank you for your testing.
I will re-do my test cases, because auto-mounting /dev/vda is not working for me now.
I used qemu 5.1.0 on win10, and qemu 3.1 on Tinycore 14, with vmlinuz + core.gz from TC13 and TC14.
-
New tests were done on win10 with qemu 5.1, with an EMPTY ext2 [raw image] abc.img of 10 MB, and vmlinuz + core.gz from TC13.
case 1: abc.img does NOT contain a /tce folder;
A: qemu without -append "tce=sda" will not mount /mnt/sda
B: qemu with -append "tce=sda" will mount /mnt/sda AND will populate it with the folders /tce/optional and /tce/ondemand and the file /tce/onboot.lst. OK!
case 2: abc.img WITH an empty /tce folder;
A: qemu even without -append "tce=sda/tce" will mount /mnt/sda. OK!
Summary: (/tce inside abc.img) OR "qemu -append tce=sda" is enough to mount /dev/sda and to see /mnt/sda/tce
case 3: abc.img WITHOUT a /tce folder, using virtio;
qemu-system-i386.exe -m 128M -kernel vmlinuz -initrd core.gz \
-drive file=abc.img,if=virtio,media=disk
A: qemu without -append "tce=..." will not mount /mnt/vda, same as case 1.A
B: qemu with -append "tce=vda" will NOT mount /mnt/vda. !?
C: case 3.B plus -append "waitusb=3". Finally /dev/vda is mounted, and I am as in case 1.B, with the newly auto-created folders /tce/optional and /tce/ondemand and the file /tce/onboot.lst
Summary: for virtio, "qemu -append tce=vda waitusb=3" is needed; with or without a /tce folder inside abc.img, it mounts /dev/vda and I see /mnt/vda/tce.
Rich, thank you! You highlighted two main things (I missed waitusb=):
1. tce= must be on the -append line, to auto-create the /tce folder IF it did not already exist.
2. waitusb=3 must be on the -append line, to wait for the virtio disk to show up.
PS: there is a race condition in qemu: the kvm emulation of a very fast CPU (running tc-config) versus the slow read of abc.img from the host disk.
-
Oh, a slow disk, wouldn't have expected that when SATA disks don't need the wait. Use the label/uuid additions to waitusb to have a fast boot.
-
@curaga: Thanks!
On win10 I have both a fast CPU (Intel i5-8350U) and a fast SSD (solid state disk), but unfortunately NO acceleration (I am not admin, and no acceleration drivers can be installed); qemu uses the tcg translator (x86 qemu-cpu on the x86_64 real CPU), yet the CPU is still faster than the I/O reading a file from the SSD.
On linux I have a slow CPU (AMD APU A6-6300) but KVM acceleration; the CPU is still faster than the I/O reading a file from the rotational SATA disk.
For files from TC13, waitusb=3 was enough; but for files from TC14 I need waitusb=5+.
So yes, unbelievable; I did not expect this from qemu's inner logic, and I did not use waitusb=; my mistake, sorry.
-
Hi nick65go
Use this for fastest response:
tc@box:~/Qemu$ blkid -s UUID diskfile.img
diskfile.img: UUID="f0fb31eb-8b59-4091-a381-93bf508a83e7"
tc@box:~/Qemu$
tc@box:~/Qemu$ qemu-system-i386 -enable-kvm -m 1G -kernel vmlinuz -initrd core.gz -append "quiet waitusb=20:UUID=f0fb31eb-8b59-4091-a381-93bf508a83e7" -drive file=diskfile.img,if=virtio,media=disk,format=raw
tc-config checks for the presence of that UUID 4 times per second and continues as soon as it finds it.
-
@Rich: Thanks! Since I can change either/both of LABEL and UUID:
1. Which is faster: waitusb=seconds:LABEL=TCVM or waitusb=seconds:UUID=12345678-aaaa-bbbb-cccc-12346789012 ?
2. What difference/reduction (in seconds) should we expect, from your experience: [0.5 ... 1.0] seconds? Less than 2 seconds?
3. Could I use a short, personalized UUID, like UUID=1234? Or is there a mandatory 32-character format?
-
@Rich, I found the answer myself, by trial and error:
from tc I see: /dev/vda: LABEL="QCOW2" UUID="69b2bd7f-5108-43b9-94d8-da4800b53851" BLOCK_SIZE="1024" TYPE="ext2"
case 1: waitusb=10:LABEL=QCOW2 => countdown stops at 7, so it starts in 3 seconds
case 2: waitusb=10:UUID=69b2bd7f-5108-43b9-94d8-da4800b53851 => countdown stops at 7, so it starts in 3 seconds
case 3: with the UUID in all uppercase (as I see it in 7zip) boot still finished successfully, but the countdown ran all the way down:
waitusb=10:UUID=69B2BD7F-5108-43B9-94D8-DA4800B53851 => countdown stops at zero
So the blkid UUID is case sensitive, and prone to error if input manually; LABEL= will be too.
So it is the same 3-second delay (before starting to load the tczs) as with a simple waitusb=3; maybe qemu has a bug.
-
Hi nick65go
... so blkid UUID is case sensitive, prone to error if manually input; so LABEL= will be. ...
That's why I ran blkid on the image file directly and then copy/pasted the result.
... So it is the same 3-second delay (before starting to load the tczs) as with a simple waitusb=3; maybe qemu has a bug.
Or it could have been 2.25 seconds. tc-config uses integer math to compute the
countdown you see and the result displayed can be off by about 1 second.
There is no downside to using waitusb=seconds:LABEL=TCVM and setting seconds
to a large number like 30.
-
idk how relevant this might be, just ftr ;D
I concatenated rootfs.gz + modules.gz into MyCore.gz (because I cannot pass multiple -initrd parameters to qemu :( ).
https://bugs.launchpad.net/qemu/+bug/393569/comments/6
Looking through old bug tickets... If I've got that right, there is multiboot support in QEMU nowadays, and we also have the possibility to load multiple files with the "loader" device ... is that enough, or is still something to be done here?
Digging around a bit, I found an example using "-device loader":
https://support.xilinx.com/s/question/0D54U00005VTWwTSAX/using-an-initramfs-with-qemu-does-not-run-uboot-is-this-intentional-also-what-is-linuxbootelf?language=en_US
as yet untested ???
-
@mocore: thanks! When/if you test it, could you please copy/paste the full command line here :)
In the meantime I will waste my hdd space with concatenation (it takes 3 seconds).
PS: I am also interested in a qemu 3.0 test (it has fewer dependencies), as it does a nice (& sufficient for me) job for small virtual-I/O kernel devices.
-
Hi
Are any of you guys running qemu on host RPi piCore64 14.x?