Tiny Core Base > TCB Bugs

printf in /init, it has missing "f"


nick65go:
Here is what I understand from the kernel docs:
https://www.kernel.org/doc/html/latest/filesystems/vfs.html?highlight=inode

"Inodes are filesystem objects such as regular files, directories, FIFOs and other beasts. They live ...in the memory (for pseudo filesystems).
A single inode can be pointed to by multiple dentries (hard links, for example, do this)".
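To illustrate that last point, here is a small sketch (the file names and scratch directory are arbitrary examples) showing two hard links sharing one inode:

```shell
# Create a file and a hard link to it in a scratch directory.
cd "$(mktemp -d)"
echo "hello" > original
ln original hardlink

# Both directory entries (dentries) report the same inode number,
# and the link count on that inode is now 2.
ls -i original hardlink
stat -c '%i %h' original
```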

case 1: [very few inodes in RAM]: we shoot ourselves in the foot, because we cannot have many files in the system ("/" from core.gz loaded into RAM). But there is more free RAM for the few little programs (files) that demand big memory when they run, like virtual machines or video editors.

case 2: [very many inodes in RAM]: we can keep a huge number of files/apps in the system, but the memory for them to run is limited (TotMem minus program size); like a lot of normal programs (vlc, libreoffice, firefox). Come on, how many apps do you really run simultaneously? Because you compile the kernel in the background while listening to music, and watching a movie, and... blah..

Anyway, Linux will swap pages from memory to the swap file (in RAM, or on disk) if the user has used "ALL physical RAM" [= full 90% of TotMem]. If this happens, Linux will run slower, and then the user can take action, like manually deleting /home/*caches, /tmp, etc. I would prefer case 2, because (in an emergency) I can delete files from the system (/usr/local.. whatever) to free RAM for other programs to run.
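Either way, you can check which limit you are closer to on a running system. A minimal sketch using standard tools (available in busybox on Tiny Core):

```shell
# Inode usage vs. free inodes on the root filesystem:
df -i /

# Byte usage on the same filesystem, for comparison.
# Whichever column hits 100% first is the limit that bites.
df -h /
```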

nick65go:
I think I get it: there is no easy/general answer. It depends on the AVERAGE size of the files on the system. A simple (in layman's terms) clarification I found at:
https://www.howtogeek.com/465350/everything-you-ever-wanted-to-know-about-inodes-on-linux/

There I saw an example of an ext4 system, with block size = 4096 bytes (= 4k) and a ratio of one inode per 16 KB of file system capacity, so a ratio of inode/TotalCapacity = 4k/16k = 1/4. But, in that example, file storage (as opposed to storage of the inodes and directory structures) had used 28% of the space on that file system, at the cost of only 10% of the inodes! So the file system capacity depleted quicker than the inode numbers were consumed. Wow, basically 28% / 10% = 2.8, so blocks were consumed nearly 3 times faster than inodes (in that example).

For core.gz loaded in tmpfs RAM, the block size = 4k (kernel page size), so for a ratio inode/capacity = 1/3, it results in an approx. 4/(1/3) = 12 KB average file size assumption. If the files (on average) are bigger than 12 KB (each), the space occupied by them (in RAM pages of 4k each) will rise to max capacity [90% MaxMem] faster than the inodes deplete.

Rich:
Hi nick65go

--- Quote from: nick65go on August 11, 2020, 09:18:38 AM --- ... For core.gz loaded in tmpfs RAM, the block size = 4k (kernel page size), so for a ratio inode/capacity = 1/3, it results in an approx. 4/(1/3) = 12 KB average file size assumption. ...
--- End quote ---
Not quite. The command:

--- Code: ---grep MemFree /proc/meminfo | awk '{print $2/3}' | cut -d. -f1
--- End code ---
is dividing the number of 1 KB units by 3, not the number of 4 KB units. So for every 3 KB of MemFree you get 1 inode.

Why divide by 3 ?
It comes down to achieving a balance:
If you run out of RAM, the file system is full.
If you run out of inodes, the file system is full.
If you have a lot of tiny files, you need more inodes.
If you have a lot of large files, you need fewer inodes.
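The arithmetic above can be replayed by hand. A sketch (the 300000 kB figure is just an example value, not from the thread):

```shell
# MemFree in /proc/meminfo is reported in 1 kB units, so dividing by 3
# budgets one inode per 3 kB of free RAM.
memfree_kb=$(grep MemFree /proc/meminfo | awk '{print $2}')
nr_inodes=$((memfree_kb / 3))
echo "MemFree: ${memfree_kb} kB -> nr_inodes: ${nr_inodes}"

# Worked example: 300000 kB free gives a budget of 100000 inodes.
echo $((300000 / 3))
```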

nick65go:
Thanks Rich for the correction: so the assumption is an average file size of 3 KB.
How about my post #9: "if nr_inodes=0, inodes will not be limited"? Any danger/advantage?

Rich:
Hi nick65go
Aside from item #3 in post #9, I can offer 2 comments:
1. The number chosen (for Tinycore releases) should be based on suitability for the majority of people, not specialized hardware setups.
2. You would never sign a blank check and give it to a stranger to fill in the details. The same holds true for  nr_inodes=0.  No surprises.
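For anyone who wants to see the difference being discussed, here is a hedged sketch of the tmpfs mount options involved (requires root; the size, inode count, and mount point are arbitrary examples, not Tiny Core defaults):

```shell
# Capped: this filesystem can hold at most 8192 inodes; `df -i` shows the limit.
mount -t tmpfs -o size=64m,nr_inodes=8192 tmpfs /mnt/test

# Uncapped (nr_inodes=0): inode allocation is limited only by available RAM,
# which is the "blank check" described above.
mount -t tmpfs -o size=64m,nr_inodes=0 tmpfs /mnt/test
```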
