General TC > Remasters / Remixes - Unofficial
[Solved] Instructions on updating TinyCore 11 to Real Time?
curaga:
So it sounds like the system isn't even trying to start X, but at least Xvesa was available. Look into ~/.profile, see which condition fails.
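The condition curaga is referring to is at the tail of the stock `~/.profile`. Reconstructed roughly from the thread (shape only, not the exact file; `SYSCONFIG` is a stand-in so it can be tried outside a real TC root), the decision looks like:

```shell
#!/bin/sh
# Paraphrase of the X-start decision at the end of Tiny Core's ~/.profile:
# X is only started when a server name was recorded in
# /etc/sysconfig/Xserver and the "text" boot code left no flag behind.
SYSCONFIG=${SYSCONFIG:-/etc/sysconfig}
if [ -f "$SYSCONFIG/text" ]; then
  echo "text flag present: staying in the console"
elif [ -f "$SYSCONFIG/Xserver" ]; then
  echo "would run: startx (server: $(cat "$SYSCONFIG/Xserver"))"
else
  echo "no Xserver recorded: nothing to start"   # matches the symptom here
fi
```

If neither file exists, the third branch is taken and X is never attempted, which fits what was observed.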
pditty:
How can I tell by looking at the .profile? I don't see an Xserver or text file in my /etc/sysconfig directory, but should I?
Screenshot of my .profile is attached
pditty:
I don't know if this helps, but I noticed that if I create an /etc/sysconfig/Xserver file and put the string 'Xvesa' in it, I can then run startx and it brings me to the TC blue screen (no icons), and the mouse doesn't work. But it looks like it's trying to do something.
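For anyone recreating that experiment, the workaround amounts to writing the server name where `.profile` expects it (`SYSCONFIG` below is a stand-in path so the snippet can be tried outside a real TC root; on the box itself you'd write to /etc/sysconfig with sudo):

```shell
#!/bin/sh
# Record Xvesa as the X server, the way Tiny Core's boot scripts normally do.
SYSCONFIG=${SYSCONFIG:-/etc/sysconfig}
mkdir -p "$SYSCONFIG"
printf 'Xvesa\n' > "$SYSCONFIG/Xserver"
# startx later reads this file to pick the server binary:
cat "$SYSCONFIG/Xserver"
```

Note this only papers over the symptom: whatever normally creates that file at boot is still not running on the RT build.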
pditty:
Also, if I turn on syslog and compare my real-time build to the TinyCore-current.iso build, this is what each of them looks like.
TinyCore-current.iso:
Apr 20 18:55:56 box authpriv.notic sudo: tc : TTY=unknown ; PWD=/mnt/sr0/cde/optional ; USER=root ; COMMAND=/usr/local/bin/umount -d /mnt/test
Apr 20 18:55:56 box authpriv.notic sudo: root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/sh
Whereas on my real-time build I get this:
Apr 20 19:05:10 box authpriv.notic sudo: tc : TTY=unknown ; PWD=/mnt/sr0/cde/optional ; USER=root ; COMMAND=/usr/local/bin/umount -d /mnt/test
Apr 20 19:05:10 box daemon.info init: starting pid 812, tty '/dev/tty1': '/sbin/getty -nl /sbin/autologin 38400 tty1'
I never see the real-time version log in as root.
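A quick way to make that difference visible on each boot is to filter the syslog for sudo entries that spawn a root shell (log path assumed; busybox syslogd writes to /var/log/messages by default):

```shell
#!/bin/sh
# On the stock ISO this prints the 'root : ... COMMAND=/bin/sh' entry;
# on the RT build it prints nothing.
LOG=${LOG:-/var/log/messages}
[ -r "$LOG" ] || { echo "no log at $LOG (is syslog enabled?)" >&2; exit 0; }
grep 'COMMAND=/bin/sh' "$LOG"
```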
xor:
:P
Are you sure you are using the right patch?
https://mirrors.edge.kernel.org/pub/linux/kernel/projects/rt/
https://mirrors.edge.kernel.org/pub/linux/kernel/projects/rt/5.4/older/
patch-5.4.3-rt1.patch.gz 17-Dec-2019 21:06 207K
patch-5.4.3-rt1.patch.sign 17-Dec-2019 21:06 438
patch-5.4.3-rt1.patch.xz 17-Dec-2019 21:06 170K
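The file name encodes the base kernel it targets, so it's worth checking the match before touching the tree. A minimal sketch (the download/apply steps are commented out since they need the source tree present; URL and version are the ones listed above):

```shell
#!/bin/sh
# Confirm the RT patch matches the kernel version before applying it.
kver=5.4.3
patchfile=patch-${kver}-rt1.patch.xz

case "$patchfile" in
  patch-"$kver"-rt*) echo "ok: $patchfile matches linux-$kver" ;;
  *) echo "mismatch: $patchfile is not for linux-$kver" >&2; exit 1 ;;
esac

# wget https://mirrors.edge.kernel.org/pub/linux/kernel/projects/rt/5.4/older/$patchfile
# xz -d "$patchfile"
# cd "linux-$kver" && patch -p1 < "../patch-${kver}-rt1.patch"
```

Applying an rt patch built for a different point release (even 5.4.2 vs 5.4.3) will typically produce rejects or a subtly broken tree.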
--- Quote ---
+Performance
+~~~~~~~~~~~
+Some basic tests were performed on a quad Intel(R) Xeon(R) CPU E5-2697 v4 at
+2.30GHz (36 cores / 72 threads). All tests involved writing a total of
+32,000,000 records at an average of 33 bytes each. Each writer was pinned to
+its own CPU and would write as fast as it could until a total of 32,000,000
+records were written. All tests involved 2 readers that were both pinned
+together to another CPU. Each reader would read as fast as it could and track
+how many of the 32,000,000 records it could read. All tests used a ring buffer
+of 16KB in size, which holds around 350 records (header + data for each
+entry).
+
+The only difference between the tests is the number of writers (and thus also
+the number of records per writer). As more writers are added, the time to
+write a record increases. This is because data pointers, modified via cmpxchg,
+and global data access in general become more contended.
+
+1 writer
+^^^^^^^^
+ runtime: 0m 18s
+ reader1: 16219900/32000000 (50%) records
+ reader2: 16141582/32000000 (50%) records
+
+2 writers
+^^^^^^^^^
+ runtime: 0m 32s
+ reader1: 16327957/32000000 (51%) records
+ reader2: 16313988/32000000 (50%) records
+
+4 writers
+^^^^^^^^^
+ runtime: 0m 42s
+ reader1: 16421642/32000000 (51%) records
+ reader2: 16417224/32000000 (51%) records
+
+8 writers
+^^^^^^^^^
+ runtime: 0m 43s
+ reader1: 16418300/32000000 (51%) records
+ reader2: 16432222/32000000 (51%) records
+
+16 writers
+^^^^^^^^^^
+ runtime: 0m 54s
+ reader1: 16539189/32000000 (51%) records
+ reader2: 16542711/32000000 (51%) records
+
+32 writers
+^^^^^^^^^^
+ runtime: 1m 13s
+ reader1: 16731808/32000000 (52%) records
+ reader2: 16735119/32000000 (52%) records
+
+Comments
+^^^^^^^^
+It is particularly interesting to compare/contrast the 1-writer and 32-writer
+tests. Despite the writing of the 32,000,000 records taking over 4 times
+longer, the readers (which perform no cmpxchg) were still unable to keep up.
+This shows that the memory contention between the increasing number of CPUs
+also has a dramatic effect on readers.
+
+It should also be noted that in all cases each reader was able to read >=50%
+of the records. This means that a single reader would have been able to keep
+up with the writer(s) in all cases, becoming slightly easier as more writers
+are added. This was the purpose of pinning 2 readers to 1 CPU: to observe how
+maximum reader performance changes.
--- End quote ---
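As a quick sanity check of the "over 4 times longer" remark, the runtimes and per-writer rates can be derived from the quoted table alone (integer arithmetic; all numbers come straight from the figures above):

```shell
#!/bin/sh
# Total and per-writer write rates for the 1-writer and 32-writer runs.
total=32000000
for run in "1 18" "32 73"; do        # "writers seconds"
  set -- $run
  echo "$1 writers: $((total / $2)) rec/s total, $((total / $1 / $2)) rec/s per writer"
done
# 73s / 18s is about 4.05, matching the "over 4 times longer" comment.
```

So each individual writer slows by roughly two orders of magnitude at 32 writers, while aggregate throughput only drops by about a factor of four.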