Tiny Core Linux
Tiny Core Base => TCB Talk => Topic started by: curaga on September 16, 2014, 01:34:49 AM
-
Again it's time to post if you'd like to see any changes in the official kernel config. This only affects x86 and x64, the ARM ports will do their own thing.
There are currently no config changes planned.
We're targeting 3.16.
This thread will remain open for two weeks.
-
hi curaga,
would you please be so kind as to take a look at stable support for suspend to ram?
in the 4.x branch it worked flawlessly; 5.x offers just a black screen after waking up.
since i use the different flavors of tinycore extensively with long uptimes, this power-management feature is essential for me.
otherwise some boxes get really hot and waste so much energy :(
thank you for your help and commitment.
-
Suspend is really hw-dependent, so I don't think there's anything we can do there. If you find it works better in another kernel version, you can always build a custom kernel.
-
Suspend is really hw-dependent...
yes, you are right.
the kernel in the current 5.x tinycore branch has a reputation for often causing problems with suspend to ram...
so perhaps suspend to ram could be one of the many criteria when the core team decides on the final version and configuration?
-
could some tuning be done for drm & radeon in the source code?
my udev rule file (from archlinux) does not work in a remaster:
/etc/udev/rules.d/30-radeon-pm.rules
KERNEL=="dri/card0", SUBSYSTEM=="drm", DRIVERS=="radeon", ATTR{device/power_method}="profile", ATTR{device/power_profile}="low"
So every few tens of minutes I need to issue:
sudo sh -c "echo low > /sys/class/drm/card0/device/power_profile"
sudo sh -c "echo mem > /sys/power/state"
to stop the GPU fan, which drives me crazy...
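The two commands above could also be run once at boot. A sketch (untested on real hardware, so treat it as an illustration): apply the "low" profile to every radeon card, e.g. from /opt/bootlocal.sh, instead of relying on the udev rule. The sysfs paths are the ones from the post; the helper name is made up.

```shell
#!/bin/sh
# Sketch: select the "low" radeon power profile on every card at boot.
set_radeon_profile() {              # $1: drm sysfs dir, normally /sys/class/drm
    for card in "$1"/card*/device; do
        # only cards whose driver uses profile-based PM expose these files
        [ -f "$card/power_method" ] || continue
        echo profile > "$card/power_method"
        echo low > "$card/power_profile"
    done
}
if [ -w /sys/class/drm ]; then      # effectively: only do this as root
    set_radeon_profile /sys/class/drm
fi
```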
-
The Radeon module has much better power management in current kernels (DPM). Perhaps you won't need to use the profile at all then.
-
According to
http://wiki.gentoo.org/wiki/Zram#Enabling_zram
there is an apparent advantage to building zram as a module.
-
Making zram a module would cater to a rather limited scenario: changing the number of such devices at runtime without a reboot. I don't think requiring a reboot there does much harm, especially with Core's usual boot speeds ;)
Meanwhile, having it as a module would slow down everyone's boot by a small amount.
-
Making zram a module would cater to a rather limited scenario, changing the number of such devices at runtime without a reboot.
What I had in mind:
1. Possibility of setting up an appropriate number of zram swap devices at boot time according to the number of CPU cores detected - plenty of such scripts are already floating around on the net.
2. Possibility of creating arbitrary filesystems on zram devices ad hoc at any moment during runtime, for purposes like testing, benchmarking, and comparison (either of different filesystems or of different versions of the same filesystem) - thus avoiding ramdisks, which allocate a fixed amount of memory, as opposed to the dynamic allocation of zram devices.
All of the above seems to depend on zram being built as a module.
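A minimal sketch of point 1, assuming zram is built as a module so num_devices can be passed to modprobe. The 25% RAM budget, the swap priority, and the helper names are my own assumptions, not anything from the official config:

```shell
#!/bin/sh
# Sketch: one zram swap device per CPU core, splitting a fixed budget
# (here 25% of RAM) between them. Needs root; sizes are illustrative.
zram_size_kb() {                        # $1: total RAM in kB, $2: core count
    echo $(( $1 / 4 / $2 ))             # 25% of RAM, split across cores
}
setup_zram_swap() {
    cores=$(nproc)
    ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    kb=$(zram_size_kb "$ram_kb" "$cores")
    modprobe zram num_devices="$cores"  # only possible with zram as a module
    i=0
    while [ "$i" -lt "$cores" ]; do
        echo $((kb * 1024)) > /sys/block/zram$i/disksize
        mkswap /dev/zram$i && swapon -p 10 /dev/zram$i
        i=$((i + 1))
    done
}
# setup_zram_swap   # uncomment on a box where you are root
```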
-
If you boot with the option zram.num_devices=16, you'll have enough free ones for every cpu plus benching?
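For the benching half, a hypothetical helper pair; the device number, size, filesystem, and mount point are all assumptions, and everything here needs root:

```shell
#!/bin/sh
# Sketch: claim a spare zram device (say zram15 when booted with
# zram.num_devices=16) for a throwaway filesystem, then give it back.
zram_fs() {                              # $1: device number, $2: size in MB
    echo $(( $2 * 1024 * 1024 )) > /sys/block/zram$1/disksize
    mkfs.ext2 -q /dev/zram$1
    mkdir -p /mnt/zram$1
    mount /dev/zram$1 /mnt/zram$1
}
zram_fs_done() {                         # $1: device number; unmount and reset
    umount /mnt/zram$1
    echo 1 > /sys/block/zram$1/reset     # frees the memory again
}
# zram_fs 15 64      # 64 MB scratch fs on /dev/zram15
# ...run benchmarks on /mnt/zram15...
# zram_fs_done 15
```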
-
Certainly that would provide enough flexibility for the vast majority of scenarios.
One question that arises, though: if one were to remaster with zram.num_devices=16 and then use that CD on a low-end machine, would the ~15 redundant zram devices waste resources, and how much?
I have so far been unable to find documentation about that.
What I did find is:
Note:
There is little point creating a zram of greater than twice the size of memory
since we expect a 2:1 compression ratio. Note that zram uses about 0.1% of the
size of the disk when not in use so a huge zram is wasteful.
Not sure whether any conclusion about the number of devices can be drawn from that note about size overhead.
-
25% * 256 MB * 0.1% * 16 ≈ 1 MB
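Spelling that estimate out (the 25% default size and 0.1% idle overhead are the figures quoted earlier in the thread, applied to a 256 MB box):

```shell
#!/bin/sh
# 16 idle devices, each sized at 25% of 256 MB of RAM, ~0.1% overhead each.
per_dev_kb=$((256 * 1024 / 4))                 # 25% of 256 MB = 65536 kB
total_overhead_kb=$((per_dev_kb * 16 / 1000))  # 0.1% each, 16 devices
echo "${total_overhead_kb} kB"                 # prints "1048 kB", about 1 MB
```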
Anyway, it's been two weeks, so closing up this thread.