General TC > Tiny Core on Virtual Machines
Setting up Qemu
nick65go:
To the best of my knowledge, and until someone proves the contrary (with a complete command line example),
the TCVD (Tiny Core Virtual Disk) boot code will not work in the scenario "qemu -kernel aaa -initrd bbb -drive file=ccc,if=virtio, ..."
Please correct the statements below where I am wrong. Thanks.
1. Qemu boots the kernel (given on its command line).
2. The kernel sees the initrd (a ramfs: temporary memory, with no block devices) and decompresses it into RAM (still not block-device aware).
3. The kernel searches a few hard-coded places for the init command (or init=/blah if given as a kernel boot parameter).
If init=/bin/sh, then the shell needs ONLY /dev/console, and optionally /dev/null (I tested this).
4. The kernel can automatically populate a lot of device nodes, in 3 cases: mounting devpts, mdev, or udev.
With "mount devpts" the kernel mounts its new default devices over the original /dev :) (like 46 tty# nodes etc.)
In Tiny Core's case /dev is already populated, and devpts is not mounted over it.
5. The kernel now has device nodes under /dev/*, even without them being listed in /etc/fstab or recorded in /etc/mtab.
6. The kernel hands execution over to init, and supervises in the background.
7. init can only run commands or scripts from RAM (the uncompressed rootfs.gz) and from any mounted devices, if any (block, NFS, etc.).
8. So the initial /TEST/sda1.qcow2 (placed on the host machine) will only expose its "content" to the kernel and its rootfs if it was on the qemu command line.
9. The kernel no longer sees the original files sda1.qcow2 and rootfs.gz.
10. tcvd=abc.img is useless now, because inside /TEST/sda1.qcow2 there is no abc.img to mount.
Summary: TCVD is not for qemu + virtio. Q.E.D.
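The scenario argued through in the steps above can be written out as a concrete command line. A sketch only (printed as a dry run rather than executed, since it needs a qemu build with virtio support; vmlinuz, MyCore.gz and MyDisk.qcow2 are placeholder file names):

```shell
# Dry run: print the virtio boot command described above instead of running it.
# All file names are placeholders.
QEMU_CMD="qemu-system-i386 -m 256M \
  -kernel vmlinuz -initrd MyCore.gz \
  -drive file=MyDisk.qcow2,if=virtio,media=disk"
echo "$QEMU_CMD"
# Inside the guest the drive appears as /dev/vda rather than /dev/sda,
# which is why boot codes that scan only /dev/sd* or /dev/hd* never find it.
```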
Rich:
Hi nick65go
--- Quote from: roberts on April 26, 2009, 02:23:53 PM ---Virtual disk support (tcvd, tiny core virtual disk) is new in Tiny Core v1.4
This was primarily setup for Qemu support, but is useful for anyone.
It uses a Qemu virtual disk image formatted as ext2, a loopback file.
To use it, first specify the 'harddisk' file with the tcvd boot option followed
by your regular boot options using tcvd, examples:
boot: tinycore waitusb=10 tcvd=harddisk.img tce=tcvd restore=tcvd
This will autoscan for the file named harddisk.img in level one directories and
setup mount capabilities as device tcvd. The subsequent tce=tcvd and restore=tcvd
will use the virtual drive special device.
boot: tinycore waitusb=10 tcvd=sda1/harddisk.img tce=tcvd restore=tcvd
This will look for the loopback file named harddisk.img on sda1 and setup a special
device tcvd used by the other boot options.
boot: tinycore waitusb=10 tcvd=sda1/qemu/harddisk.img tce=tcvd restore=tcvd
This is/was the typical Qemu setup that I used in Damn Small Linux.
Again a special device tcvd will be setup and then other boot options are able
to use it.
--- End quote ---
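For reference, the ext2 loopback file roberts describes can be prepared on any Linux host along these lines. A sketch only: the 16 MiB size and the name harddisk.img are arbitrary, and mounting it to add a tce directory needs root, so that step is left as a comment:

```shell
# Create a 16 MiB file and format it as ext2 (a plain loopback file, no partitions).
dd if=/dev/zero of=harddisk.img bs=1M count=16
# -F forces mkfs to operate on a regular file instead of a block device.
if command -v mkfs.ext2 >/dev/null; then
  mkfs.ext2 -F -q harddisk.img
fi
# To populate it (needs root):
#   sudo mount -o loop harddisk.img /mnt/tmp
#   sudo mkdir -p /mnt/tmp/tce/optional
```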
nick65go:
Hi Rich,
For clarity: I do not have a "harddisk.img" (neither in the root "/" [a level one directory], nor on any other disk/partition) INSIDE my container (named XXX.qcow2), which is attached when I boot this GUEST machine (under qemu) with kernel + core.gz.
My container is a file with no partitions on it, formatted as ext2, and inside the container I have only a /tce folder populated with *.tcz files etc. So qemu boots with the container attached. The kernel and core.gz are not in the container.
The idea of the TCVD boot code (in TC 1.4) was based on the assumption that the kernel could see the HOST machine's root files (its /) and the HOST machine's partitions (/sda1), which is NOT the case with my qemu setup, as I explained. In my case, the kernel does not see any "harddisk.img" image to mount in its environment.
PS: The only small bug here is that (by default) an instruction like "mount /dev/vda" is not automatically issued; that I could remaster myself.
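The missing step mentioned in the PS amounts to a single mount inside the guest. A minimal sketch, printed as a dry run here since it needs a running guest and root; /dev/vda and /mnt/vda are assumptions, adjust to the actual device node:

```shell
# Dry run: print the commands that would make the virtio disk usable in the guest.
MOUNT_CMDS="sudo mkdir -p /mnt/vda
sudo mount /dev/vda /mnt/vda
ls /mnt/vda/tce/optional"
printf '%s\n' "$MOUNT_CMDS"
```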
Rich:
Hi nick65go
Having never used Qemu, I decided to install it on one of my machines
and play with it.
------- Boot ISO file, desktop auto loads from cde directory, additional extensions auto load from tce directory. -------
First I created an empty formatted image file. I mounted the file and created
a tce/optional directory structure and installed mc.tcz.
I also downloaded:
http://tinycorelinux.net/10.x/x86/release/TinyCore-10.1.iso
Then I ran:
--- Code: ---qemu-system-i386 -enable-kvm -hda Apps.img -m 1G -cdrom TinyCore-10.1.iso -boot d
--- End code ---
The syslinux boot loader displayed, I hit enter and a few moments later was presented with
a desktop. The wbar had an mc icon, showing that it found and loaded mc.tcz from my
Apps.img file which was mounted as /mnt/sda. The Apps utility found the tce directory
and was able to install extensions.
--------------------- Boot vmlinuz and core.gz, all extensions auto load from tce directory. ---------------------
First I created an empty formatted image file. I mounted the file and created
a tce/optional directory structure and installed Xvesa.tcz flwm_topside.tcz wbar.tcz aterm.tcz mc.tcz.
I also downloaded:
http://tinycorelinux.net/10.x/x86/release/TinyCore-10.1.iso
Then:
--- Code: ---tc@E310:~/Qemu$ mkdir iso
tc@E310:~/Qemu$ sudo mount TinyCore-10.1.iso iso
mount: /home/tc/Qemu/iso: WARNING: device write-protected, mounted read-only.
# Copy vmlinuz and core.gz to current directory:
tc@E310:~/Qemu$ cp iso/boot/* .
cp: -r not specified; omitting directory 'iso/boot/isolinux'
--- End code ---
And launch Qemu:
--- Code: ---qemu-system-i386 -enable-kvm -hda Apps.img -m 1G -kernel vmlinuz -initrd core.gz -append "quiet"
--- End code ---
As before, I was presented with a desktop and wbar had an mc icon.
The 2 attached files list all of the commands used to set up these tests.
The FetchExt.sh command is a script that can be found attached to this post:
https://forum.tinycorelinux.net/index.php/topic,23034.msg164745.html#msg164745
nick65go:
Hi Rich, thank you for the info, feedback and perseverance!
I hope I will not annoy you with my remarks; I am truly for collaboration. So, may I repeat myself?
1. Using the old simulated hardware, TC works! I mean when using the syntax "qemu -hda xxx.img -cdrom yyy.iso". In this case PATA/SATA/AHCI is simulated for the hdd and SCSI for the cdrom, with a PIIX motherboard chipset etc. All is OK, as I said in my previous posts. The device nodes are /dev/sd* for the HDD and /dev/sr* for the CDROM.
2.
--- Code: ---qemu-system-i386 -m 128M -kernel vmlinuz -initrd MyCore.gz -drive file=MyDVD.iso,if=virtio,media=cdrom,readonly=on -drive file=Mydisk.qcow2,if=virtio,media=disk
--- End code ---
Using the new VIRTUAL devices [the key part in the code: ,if=virtio,], virtio-blk for the HDD and virtio-scsi for the CDROM, will NOT work automatically. The device nodes are like /dev/vda for the HDD and /dev/vdb for the CDROM, depending on whether the HDD comes first and the cdrom second on the virtio-pci controllers. PS: When I say all-virtio devices, I mean for example not using the implicit old-style emulated Intel E1000 network card, but a virtio NIC instead, etc. The speed is 2x up to 10x faster, in theory. There is no point in qemu simulating all the low-level real hardware; instead qemu can bypass/simplify the internal hardware logic and just present a correct in/out data flow for the virtual device.
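For completeness, an all-virtio invocation along the lines described above might look like this. A sketch, printed as a dry run; the user-mode -netdev setup and the file names are placeholders, not a tested configuration:

```shell
# Dry run: print an all-virtio qemu command (disk, cdrom and NIC all virtio).
QEMU_ALLVIRTIO="qemu-system-i386 -m 256M \
  -kernel vmlinuz -initrd MyCore.gz \
  -drive file=Mydisk.qcow2,if=virtio,media=disk \
  -drive file=MyDVD.iso,if=virtio,media=cdrom,readonly=on \
  -netdev user,id=n0 -device virtio-net-pci,netdev=n0"
echo "$QEMU_ALLVIRTIO"
```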