Show simple item record

dc.contributor.advisor: Gruenwald, Le
dc.contributor.author: Li, Liangzhe
dc.date.accessioned: 2017-05-11T14:25:06Z
dc.date.available: 2017-05-11T14:25:06Z
dc.date.issued: 2017-05
dc.identifier.uri: https://hdl.handle.net/11244/50770
dc.description.abstract: Today, cloud databases are widely used in many applications. The pay-per-use model of cloud databases enables on-demand access to reliable and configurable services (CPU, storage, networks, and software) that can be quickly provisioned and released with minimal management effort and cost for different categories of users (also called tenants). There is no need for users to set up the infrastructure or buy the software. Users without a related technical background can easily manage the cloud database through the console provided by the service provider, and they pay the cloud service provider only for the services they use through a service level agreement (SLA) that specifies the performance requirements and the pricing associated with the leased services. However, due to the resource-sharing structure of the cloud, different tenants' workloads compete for computing resources. This affects tenants' performance, especially during workload peaks. It is therefore important for cloud database service providers to develop techniques that can tune the database to restore SLA compliance when a tenant's SLA is violated. In this dissertation, two algorithms are presented to improve cloud database performance in a multi-tenancy environment: a memory buffer management algorithm called SLA-LRU and a vertical database partitioning algorithm called AutoClustC. SLA-LRU takes the SLA and each buffer page's frequency, recency, and value into account when performing buffer page replacement. The value of a buffer page represents the removal cost of the page and can be computed using the corresponding tenant's SLA penalty function. Only buffer pages whose tenants have the least SLA penalty cost increment are considered by SLA-LRU when a buffer page replacement takes place.
AutoClustC estimates the tuning cost of resource provisioning and of database partitioning, then selects the more cost-saving tuning method. If database partitioning is selected, the algorithm uses data mining to identify the database partitions that are frequently accessed together and re-partitions the database accordingly. It then distributes the resulting partitions to the standby physical machines (PMs) with the least overload score, computed from both the PMs' communication cost and their overload status. Comprehensive experiments were conducted to study the performance of SLA-LRU and AutoClustC using the TPC-H benchmark on both a public cloud (Amazon RDS) and a private cloud. The experimental results show that SLA-LRU gives the best overall performance in terms of query response time and SLA penalty cost improvement ratio compared to existing memory buffer management algorithms, and that AutoClustC identifies the more cost-saving cloud database tuning method (resource provisioning versus database partitioning) with high accuracy and re-partitions the database dynamically to provide better query response time than the current partitioning configuration.
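The two decisions AutoClustC makes above (which tuning method to apply, and which standby PM receives a partition) can be sketched as below. The equal weighting of communication cost and overload status, and the PM dictionary fields, are illustrative assumptions; the abstract only states that the overload score combines both factors.

```python
def select_tuning_method(provisioning_cost: float,
                         partitioning_cost: float) -> str:
    """Pick whichever tuning action is estimated to cost less."""
    return ("resource_provisioning"
            if provisioning_cost < partitioning_cost
            else "database_partitioning")

def overload_score(pm: dict) -> float:
    """Assumed overload score: an equally weighted combination of the
    PM's communication cost and its current overload status."""
    return 0.5 * pm["comm_cost"] + 0.5 * pm["overload"]

def place_partition(standby_pms: list[dict]) -> dict:
    """Assign a re-partitioned fragment to the standby PM with the
    least overload score."""
    return min(standby_pms, key=overload_score)
```

Usage: if provisioning is estimated at $100 and partitioning at $40, `select_tuning_method(100.0, 40.0)` returns `"database_partitioning"`, after which each resulting partition goes to the standby PM minimizing the assumed score.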
dc.language: en_US
dc.subject: Computer Science
dc.title: SLA-Based Performance Tuning Techniques for Cloud Databases
dc.contributor.committeeMember: Cheng, Qi
dc.contributor.committeeMember: Dhall, Sudarshan
dc.contributor.committeeMember: Kim, Changwook
dc.contributor.committeeMember: Moses, Scott
dc.date.manuscript: 2017-05-10
dc.thesis.degree: Ph.D.
ou.group: College of Engineering::School of Computer Science
shareok.nativefileaccess: restricted


