
Author Topic: How to network several TinyCore workstations and a server?  (Read 9145 times)

Offline ananix

  • Full Member
  • ***
  • Posts: 174
Re: How to network several TinyCore workstations and a server?
« Reply #15 on: August 28, 2012, 02:09:30 PM »
I have pointed out a potential problem with NFS and distributed computing; it's difficult to know the details of how the thread initiator's program is set up and how it really works, but from what I wrote I think the thread initiator can evaluate his own system and setup himself. I think a system such as you describe would probably work just fine, but again, the exact details of your system are not available either, so only you can judge.

Given the URL I handed to you, I take it you don't mean an example of an online system versus a batch system, but rather an example of the potential problem I pointed out. I can give one, but understanding online and batch processing is better for making good solutions and understanding systems.

Example by V. Edward Gold. Excuse me for not following up further for now; it's too much and I'm working tired and late.

"Every month, the company needed to process several million records from an input file. Fortunately, each record could be processed independently of all the other records, so there was no restriction on separating the file into multiple pieces and sending each piece to a different CPU for processing. Each machine would read a record from this file, retrieve a data item from that file, use that data to look up an entry in another file, and then write out a record. In the end, the various output files would be combined.

Initially, I used NFS to allow each of the machines to access the lookup file, which had about 10 million entries in it. This turned out to be a huge mistake! Each time a machine read a record from its data file, it needed to do at least 23 seek operations on the lookup table file, using a binary search algorithm to identify the record of interest. All these random reads across NFS were absolutely killing the throughput! Instead of using this method, I copied the lookup table to a local disk on each machine. This copy operation took less than 15 minutes, and then the main processing job could run in two hours instead of 40 hours. The process might have taken 20 times longer using NFS."
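Below is a minimal sketch (not part of the quoted text) of the kind of per-record lookup the example describes: a binary search over a file of fixed-size, key-sorted records using pread(). With about 10 million entries, each lookup needs roughly log2(10,000,000), about 23 probes, and each probe is a seek plus a small read, which is cheap on a local disk but a network round trip every time over NFS. The record layout, key size, and file name here are assumptions made only for illustration.

/* Sketch, with assumed record layout: binary search over a file of
 * fixed-size, key-sorted records using pread().  Each probe is one
 * seek plus one small read; about 23 probes for 10 million records. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>

#define REC_SIZE 64            /* assumed fixed record size          */
#define KEY_SIZE 16            /* assumed key field at record start  */

/* Returns 0 and fills 'rec' if the key is found, -1 otherwise. */
static int lookup(int fd, const char *key, char rec[REC_SIZE])
{
    struct stat st;
    if (fstat(fd, &st) < 0)
        return -1;

    long lo = 0, hi = st.st_size / REC_SIZE - 1;
    while (lo <= hi) {
        long mid = lo + (hi - lo) / 2;

        /* one seek + one small read per probe: fast on local disk,
           a full network round trip each time over NFS */
        if (pread(fd, rec, REC_SIZE, (off_t)mid * REC_SIZE) != REC_SIZE)
            return -1;

        int cmp = memcmp(key, rec, KEY_SIZE);
        if (cmp == 0)
            return 0;
        else if (cmp < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return -1;
}

int main(void)
{
    char rec[REC_SIZE];
    int fd = open("lookup.dat", O_RDONLY);   /* file name is an assumption */
    if (fd < 0) { perror("open"); return 1; }

    if (lookup(fd, "some-key-0000001", rec) == 0)
        printf("found: %.64s\n", rec);
    else
        puts("not found");

    close(fd);
    return 0;
}

Copying the lookup file to a local disk on each node first, as the example in the quote does, keeps all of those probes on local storage instead of sending each one across NFS.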
« Last Edit: August 28, 2012, 03:15:02 PM by ananix »