
Category: Papers

  Impact of Connection Splitting on TCP Performance

December 05, 2005  

"Technical Report: Impact of Connection Splitting on TCP Performance," Arpith Chacko Jacob, Rahul Pratap Maddimsetty 2005.

http://www.arpith.com/media/papers/tcpcc.pdf

Abstract: In this project, we investigate the effect of splitting a single TCP connection into several shorter connections taking the same route. The main motivation for these experiments is to study the inherent tendency of TCP to favour connections with lower RTT, i.e. to let them achieve higher throughput. We observe that TCP indeed performs significantly better on connections with low RTT, and we are hence able to speed up a file transfer between two nodes by adding one or more intermediate nodes.
The performance metrics used to compare the TCP performance of the direct connection with that of the split connections are the total transfer time, the average RTT and the throughput. We study this effect for various file sizes, link qualities and numbers of intermediate nodes. PlanetLab was used to run a large number of experiments for this analysis.
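
For context, a common rule of thumb (the Mathis et al. approximation) is that steady-state TCP throughput scales roughly as MSS / (RTT * sqrt(p)) for loss rate p, which is why giving each hop a shorter RTT can raise the rate each hop sustains. The sketch below is a minimal, illustrative relay for an intermediate node, not the code used in the report: it accepts one TCP connection and copies bytes onto a second connection towards the destination, so each hop runs its own congestion control over a shorter RTT. The address and port constants are placeholders.

    /* relay.c - minimal one-way TCP relay for a split-connection experiment.
     * Listens on LISTEN_PORT, connects to NEXT_HOP:NEXT_PORT, and copies all
     * bytes received from the upstream sender onto the downstream connection.
     * NEXT_HOP and the ports are placeholders; illustrative sketch only.
     */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define LISTEN_PORT 5001          /* where the previous hop connects to us */
    #define NEXT_HOP    "10.0.0.2"    /* placeholder: next node on the route   */
    #define NEXT_PORT   5001

    int main(void) {
        /* Accept one upstream connection. */
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        struct sockaddr_in la = {0};
        la.sin_family = AF_INET;
        la.sin_addr.s_addr = htonl(INADDR_ANY);
        la.sin_port = htons(LISTEN_PORT);
        if (bind(lsock, (struct sockaddr *)&la, sizeof la) < 0 || listen(lsock, 1) < 0) {
            perror("bind/listen");
            return 1;
        }
        int up = accept(lsock, NULL, NULL);
        if (up < 0) { perror("accept"); return 1; }

        /* Open the downstream connection; this hop now has its own RTT,
         * congestion window and loss recovery, independent of the first hop. */
        int down = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in da = {0};
        da.sin_family = AF_INET;
        da.sin_port = htons(NEXT_PORT);
        inet_pton(AF_INET, NEXT_HOP, &da.sin_addr);
        if (connect(down, (struct sockaddr *)&da, sizeof da) < 0) {
            perror("connect");
            return 1;
        }

        /* Copy bytes until the upstream sender closes its side. */
        char buf[64 * 1024];
        ssize_t n;
        while ((n = read(up, buf, sizeof buf)) > 0) {
            ssize_t off = 0;
            while (off < n) {
                ssize_t w = write(down, buf + off, n - off);
                if (w < 0) { perror("write"); return 1; }
                off += w;
            }
        }
        close(up);
        close(down);
        close(lsock);
        return 0;
    }

Chaining one or more such relays along the original route reproduces the split-connection setup whose transfer time, RTT and throughput the report measures against the direct connection.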

  Ultrasound over USB

December 05, 2005  

"Technical Report: Ultrasound over USB," Arpith Chacko Jacob, Brandon Harris, Todd Bumgarner 2004.

http://www.arpith.com/media/papers/usb-ultrasound.pdf

Abstract— The project implements two designs on the Xilinx Spartan-II FPGA using an FTDI USB 1.1 IC. The first design is a simple loopback that receives data from the host PC over USB and retransmits it back. The second design transmits selected scan data from an ultrasound backend to the host PC, supporting a maximum of 5 frames per second. To achieve acceptable frame rates on a USB 1.1 bus, the amount of data transferred was cut down by a factor of four by selecting the sample closest to each pixel; linear interpolation was then used to generate a 512 x 512 image from the scanned data. The hardware design was described in VHDL, simulated with ModelSim and synthesized using the Xilinx tools.
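
The data-reduction step can be pictured as a nearest-sample scan conversion: for each pixel of the output image, only the sample whose (range, angle) position falls closest to that pixel needs to be kept. The sketch below illustrates that selection for an assumed sector-scan geometry; the array sizes, sector angle and apex position are assumptions made for the sketch, and the report's actual design is of course written in VHDL for the FPGA, not C.

    /* scan_convert.c - illustrative nearest-sample scan conversion.
     * Maps polar ultrasound scan data (BEAMS beams x SAMPLES samples per beam)
     * onto a 512 x 512 Cartesian image by picking, for every pixel, the single
     * closest sample.  Geometry (90-degree sector, apex at the bottom centre)
     * and dimensions are assumptions of the sketch, not taken from the report.
     */
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BEAMS    128                 /* assumed number of scan lines       */
    #define SAMPLES  1024                /* assumed samples per scan line      */
    #define IMG      512                 /* output image is IMG x IMG pixels   */
    #define SECTOR   1.5707963267948966  /* assumed 90-degree sector (radians) */

    static uint8_t scan[BEAMS][SAMPLES]; /* raw envelope data from the backend */
    static uint8_t image[IMG][IMG];      /* scan-converted output image        */

    void scan_convert(void) {
        double max_r = (double)(IMG - 1); /* map full sample depth to image height */
        for (int y = 0; y < IMG; y++) {
            for (int x = 0; x < IMG; x++) {
                /* Pixel position relative to the apex at the bottom centre. */
                double dx = x - (IMG - 1) / 2.0;
                double dy = (IMG - 1) - y;          /* depth increases downwards  */
                double r  = sqrt(dx * dx + dy * dy);/* range from the transducer  */
                double th = atan2(dx, dy);          /* angle from the centre line */

                /* Nearest beam, and nearest sample along that beam. */
                int beam = (int)lround((th / SECTOR + 0.5) * (BEAMS - 1));
                int samp = (int)lround(r / max_r * (SAMPLES - 1));

                if (beam < 0 || beam >= BEAMS || samp < 0 || samp >= SAMPLES)
                    image[y][x] = 0;                 /* pixel lies outside the sector */
                else
                    image[y][x] = scan[beam][samp];  /* closest sample wins           */
            }
        }
    }

    int main(void) {
        scan_convert();
        printf("centre pixel value: %u\n", image[IMG / 2][IMG / 2]);
        return 0;
    }

The report additionally applies linear interpolation to produce the final 512 x 512 image; the nearest-sample selection sketched above is the step that keeps the per-frame payload small enough for a USB 1.1 bus.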

  Whole Genome Comparison using Commodity Workstations

July 31, 2004  

"Whole Genome Comparison using Commodity Workstations," Arpith Chacko Jacob, Sugata Sanyal, 2003.

http://www.arpith.com/media/papers/jacob03pdsw.pdf

Abstract— Whole genome comparison consists of comparing or aligning two genome sequences in the hope that analogous functional or physical characteristics may be observed. Sequence comparison is performed either with slow but rigorous algorithms or with faster heuristic approaches. However, due to the large size of genomic sequences, the capacity of current software is limited. In this work, we design a parallel, distributed system for the Smith-Waterman dynamic programming sequence comparison algorithm. We use subword parallelism to speed up sequence-to-sequence comparison using Streaming SIMD Extensions (SSE) on Intel Pentium processors. We compare two approaches, one requiring explicit handling of data dependencies and the other built to handle dependencies automatically. We achieve a speedup of 10 to 30 and establish the optimum conditions for each approach. We then implement a scalable and fault-tolerant distributed version of the genome comparison process on a network of workstations, based on a static work allocation algorithm. We achieve speeds upwards of 8000 MCUPS on 64 workstations, making this one of the fastest implementations of the Smith-Waterman algorithm.
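
For reference, the recurrence being accelerated is the Smith-Waterman local-alignment score H(i,j) = max(0, H(i-1,j-1) + s(a_i, b_j), H(i-1,j) - g, H(i,j-1) - g), and one MCUPS corresponds to a million such cell updates per second. The scalar C sketch below shows exactly those cell updates; the scoring values and the linear (rather than affine) gap model are assumptions of the sketch, and the report's contribution lies in vectorising these updates with SSE subword parallelism and distributing the work across workstations.

    /* sw_scalar.c - scalar Smith-Waterman local alignment, linear gap penalty.
     * Each inner-loop iteration is one cell update, so the MCUPS rate is
     * (m * n) / (seconds * 1e6).  The scores below are placeholder values.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MATCH     2   /* assumed match score      */
    #define MISMATCH -1   /* assumed mismatch penalty */
    #define GAP       2   /* assumed linear gap cost  */

    static int max4(int a, int b, int c, int d) {
        int m = a > b ? a : b;
        if (c > m) m = c;
        if (d > m) m = d;
        return m;
    }

    /* Returns the best local-alignment score of sequences a (length m) and
     * b (length n).  Only the previous row of H is kept, which is enough
     * when just the score is required. */
    int smith_waterman(const char *a, int m, const char *b, int n) {
        int *prev = calloc(n + 1, sizeof *prev);
        int *curr = calloc(n + 1, sizeof *curr);
        int best = 0;
        for (int i = 1; i <= m; i++) {
            curr[0] = 0;
            for (int j = 1; j <= n; j++) {
                int s = (a[i - 1] == b[j - 1]) ? MATCH : MISMATCH;
                /* H(i,j) depends on the NW, N and W neighbours, clamped at 0. */
                curr[j] = max4(0,
                               prev[j - 1] + s,    /* align a[i] with b[j] */
                               prev[j] - GAP,      /* gap in b             */
                               curr[j - 1] - GAP); /* gap in a             */
                if (curr[j] > best) best = curr[j];
            }
            int *tmp = prev; prev = curr; curr = tmp; /* roll the rows */
        }
        free(prev);
        free(curr);
        return best;
    }

    int main(void) {
        const char *a = "ACACACTA";
        const char *b = "AGCACACA";
        printf("best local score: %d\n",
               smith_waterman(a, (int)strlen(a), b, (int)strlen(b)));
        return 0;
    }

As a rough, purely illustrative sanity check of the throughput figure: comparing two hypothetical 10-megabase sequences means a 10^14-cell matrix, which at 8000 MCUPS completes in about 1.25 x 10^4 seconds, i.e. roughly three and a half hours on the 64-workstation cluster.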