
dc.contributor.advisor: Trafalis, Theodore
dc.contributor.author: Tahri, Chyrine
dc.date.accessioned: 2019-05-10T18:24:08Z
dc.date.available: 2019-05-10T18:24:08Z
dc.date.issued: 2019-05-10
dc.identifier.uri: https://hdl.handle.net/11244/319700
dc.description.abstract: Trading has been at the heart of commerce throughout human history, and its evolution is one of the most significant factors in the course of humanity. Consistently profitable traders treat every trade, negative or positive, as an opportunity to improve. Reinforcement Learning (RL) is a framework in which an agent performs actions on an environment and observes the immediate result; this feedback is used to improve the next action taken, and the process starts again. We explore RL as a plausible basis for an algorithmic trader, implementing two different data representations in reinforcement learning-based trading scenarios. The first represents the high, low, and close prices as percentages of the open price, in an attempt to learn price patterns. The second adds technical indicators to the price observations, aiming to provide more sophisticated metrics that give insight into market signals. This approach offers the opportunity to learn market analysis and signal spotting. Both agents learned to wait before selling their shares. The best result for the first agent, using per-minute Bitcoin prices as input data, was to buy and hold rather than to make shorter trades. The second agent behaved similarly, but failed to make a positive profit. We found that the market understanding of both agents was still immature: the mapping of market states to actions is dictated by the policy, but the market does not always respond in the same way. The results show good potential for the approach, but financial markets are large and complex, and modeling this environment still presents many challenges.
dc.language: en_US
dc.subject: Reinforcement Learning
dc.subject: Trading
dc.subject: Bitcoin
dc.title: Reinforcement Learning Approach for Algorithmic Trading of Bitcoin
dc.contributor.committeeMember: Hougen, Dean F.
dc.contributor.committeeMember: Radhakrishnan, Sridhar
dc.contributor.committeeMember: Gonzalez, Andres D.
dc.date.manuscript: 2019-05
dc.thesis.degree: Master of Science
ou.group: Gallogly College of Engineering
shareok.nativefileaccess: restricted
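
A minimal illustrative sketch, not part of the repository record, of the first data representation described in the abstract: high, low, and close expressed relative to the open price. It assumes per-minute Bitcoin OHLC bars in a pandas DataFrame; the column names ("open", "high", "low", "close") and the function name are assumptions for illustration, not taken from the thesis.

```python
import pandas as pd


def to_relative_prices(bars: pd.DataFrame) -> pd.DataFrame:
    """Express high, low, and close as fractional offsets from the open price.

    Assumed input: a DataFrame with "open", "high", "low", "close" columns
    holding per-minute price bars.
    """
    rel = pd.DataFrame(index=bars.index)
    rel["rel_high"] = (bars["high"] - bars["open"]) / bars["open"]
    rel["rel_low"] = (bars["low"] - bars["open"]) / bars["open"]
    rel["rel_close"] = (bars["close"] - bars["open"]) / bars["open"]
    return rel


if __name__ == "__main__":
    # Two toy per-minute bars, purely for demonstration.
    bars = pd.DataFrame({
        "open":  [100.0, 101.0],
        "high":  [102.0, 101.5],
        "low":   [99.0, 100.2],
        "close": [101.0, 100.8],
    })
    print(to_relative_prices(bars))
```

The second representation described in the abstract would extend this state with technical indicators computed from the same price bars.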

