Abstract: Regression analysis is one of the most common methods of statistical inference, with roots in scientific research across all fields stretching back more than two centuries. It is widely used because it offers an intuitive way to establish a relationship between observations of different variables, and thereby provides empirical evidence for a hypothesized connection, or dependence, between them. Regression is an invaluable tool for research and commerce alike, and has understandably received much attention from software companies over the past two decades as they realized the immense potential of computers to improve and facilitate the use of the method. Although the contribution of such software should not be understated, the massive amounts of information that have become available with the rise of the digital age have made it increasingly time-consuming, and at times nearly impossible, for machines to derive the estimated regression coefficients. This is a computationally intensive problem, and improving the efficiency of the underlying algorithm is crucial for time-sensitive applications of regression. The graphics cards introduced in the past two years have gained wide recognition as an accessible alternative to parallel computer clusters for many applications, and the architecture and parallel capabilities of the GPU hold great potential for accelerating regression calculations. This thesis introduces a new parallel regression algorithm, implemented in CUDA for execution on the GPU, and demonstrates that it is between four times faster on smaller datasets and six hundred times faster on larger ones, depending also on the GPU architecture.
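For context, the coefficient computation the abstract refers to can be sketched for ordinary least squares, the most common form of regression (the exact formulation targeted by the thesis is assumed here to be of this standard type):

```latex
% Ordinary least squares: X is the n-by-p design matrix of observations,
% y is the n-vector of responses, and the estimated coefficients are
\hat{\beta} = \left( X^{\mathsf{T}} X \right)^{-1} X^{\mathsf{T}} y
```

Forming $X^{\mathsf{T}}X$ requires on the order of $np^2$ operations and solving the resulting system a further $p^3$, which is why large datasets make the computation expensive and a natural candidate for GPU parallelization.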