The goal of this project was to explore the limit of the efficiency gains that multithreaded computing can provide. To explore this topic, MobaXterm was used to access a Linux, C-based environment, along with several large test-case files whose contents were hashed block by block. This setup was used to build a program that reads a file and computes its hash codes. By varying the number of threads used to compute the hash codes, the efficiency of threading and the thread count at which it stops improving can be determined. This experiment has its limitations, as it was run on a server with 48 processors; the “lscpu” command can be used to determine the CPU count on any other machine. A different CPU count may change the speed-up and computing times recorded in this experiment.
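Since the exact hashing routine and file handling are not reproduced here, the following is only a minimal sketch of the block-per-thread approach described above, assuming POSIX threads; the FNV-1a-style checksum stands in for the actual hash function, and the thread count, buffer size, and contents are illustrative placeholders.

```c
/*
 * Minimal sketch (assumed details): split an in-memory buffer into equal
 * blocks and hash each block in its own thread. The FNV-1a-style checksum
 * stands in for the project's actual hash function.
 * Compile with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NTHREADS 4            /* thread count under test */
#define TOTAL    (1u << 20)   /* 1 MiB of sample data */

typedef struct {
    const unsigned char *data;  /* start of this thread's block */
    size_t len;                 /* number of bytes in the block */
    uint32_t hash;              /* result written by the thread */
} block_job;

static void *hash_block(void *arg)
{
    block_job *job = (block_job *)arg;
    uint32_t h = 2166136261u;               /* FNV-1a offset basis */
    for (size_t i = 0; i < job->len; i++) {
        h ^= job->data[i];
        h *= 16777619u;                     /* FNV-1a prime */
    }
    job->hash = h;
    return NULL;
}

int main(void)
{
    unsigned char *buf = malloc(TOTAL);
    if (buf == NULL)
        return 1;
    memset(buf, 'x', TOTAL);                /* placeholder file contents */

    pthread_t tids[NTHREADS];
    block_job jobs[NTHREADS];
    size_t block = TOTAL / NTHREADS;

    for (size_t i = 0; i < NTHREADS; i++) {
        jobs[i].data = buf + i * block;
        jobs[i].len  = (i == NTHREADS - 1) ? TOTAL - i * block : block;
        pthread_create(&tids[i], NULL, hash_block, &jobs[i]);
    }
    for (size_t i = 0; i < NTHREADS; i++) {
        pthread_join(tids[i], NULL);
        printf("block %zu hash: %08x\n", i, (unsigned)jobs[i].hash);
    }
    free(buf);
    return 0;
}
```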
One claim that this project confirms is that the computing time does not always decrease as the thread count increases. The graph of threads versus processing time for the pc2_tc0 test case shows an upturn at the end, at 256 threads. This happens because, for that block size, the CPU spends more time creating the threads than executing the hashing function inside them. In conclusion, while adding threads does reduce the computing time, there is a threshold beyond which the CPU takes longer to initiate the threads than they save, making additional threads counterproductive.
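The thread-creation overhead behind that upturn can be illustrated with a small, hedged sketch that only times how long it takes to create and join threads that do no work; the count of 256 mirrors the test case above, and the measured value will vary by machine.

```c
/*
 * Sketch of thread-creation overhead: time the creation and joining of
 * threads whose bodies do nothing. When the per-block work is small, this
 * cost alone can exceed the hashing time, which is the source of the upturn
 * at 256 threads described above.
 * Compile with: gcc -pthread overhead.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 256

static void *noop(void *arg) { (void)arg; return NULL; }

int main(void)
{
    pthread_t tids[NTHREADS];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, noop, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                (end.tv_nsec - start.tv_nsec) / 1e6;
    printf("creating and joining %d threads took %.3f ms\n", NTHREADS, ms);
    return 0;
}
```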
Regarding the speed-up: the speed-up is derived from the computing time for each file, so it has the same relationship to the thread count that the computing time does. The speed-up increases only until the thread-count threshold is reached. After the number of threads crosses that value, the computing time begins to rise again, because the program creates far more threads than the file's size requires. Therefore, the speed-up follows a downward-opening, roughly parabolic curve as the thread count increases: it peaks at the best thread count (the threshold) for the file size and slowly declines thereafter.
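For reference, the speed-up is assumed here to follow the standard definition, where T_1 is the computing time with a single thread and T_N is the computing time with N threads:

```latex
S(N) = \frac{T_{1}}{T_{N}}
```

Under this definition, any rise in T_N past the thread-count threshold translates directly into the decline in speed-up described above.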