
My Research

I am a third-year Ph.D. student in the Software Evolution and Analysis Laboratory (SEAL) at the UCLA Computer Science Department, advised by Prof. Miryung Kim.

I study active sensemaking for developer tools, designing AI-based human-in-the-loop systems that align tool outputs with developers’ mental models via customized summarization, speculative “what-if” analysis, contrastive grouping, and inquiry-based “why/why-not” debugging. I apply these ideas to code search, bug finding, taint analysis, and API evolution, using interpretable models with lightweight active learning, and I evaluate their impact on accuracy, time-to-decision, and mental workload.

Google Scholar

DBLP

Publications

TraceLens: Question-Driven Debugging for Taint Flow Understanding

Burak Yetiştiren, Hong Jin Kang, and Miryung Kim. 2025. TraceLens: Question-Driven Debugging for Taint Flow Understanding. https://doi.org/10.48550/arXiv.2508.07198

From Noise to Knowledge: Interactive Summaries for Developer Alerts

Burak Yetiştiren, Hong Jin Kang, and Miryung Kim. 2025. From Noise to Knowledge: Interactive Summaries for Developer Alerts. https://doi.org/10.48550/arXiv.2508.07169

Towards Unmasking LGTM Smells in Code Reviews: A Comparative Study of Comment-Free and Commented Reviews

Mahmut Furkan Gön, Burak Yetiştiren, and Eray Tüzün. 2024. Towards Unmasking LGTM Smells in Code Reviews: A Comparative Study of Comment-Free and Commented Reviews. In Proceedings of the 40th International Conference on Software Maintenance and Evolution.

Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT

Burak Yetiştiren, Işık Özsoy, Miray Ayerdem, and Eray Tüzün. 2023. Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT. https://doi.org/10.48550/arXiv.2304.10778

Assessing the Quality of GitHub Copilot’s Code Generation


Burak Yetiştiren, Işık Özsoy, and Eray Tüzün. 2022. Assessing the Quality of GitHub Copilot’s Code Generation. In Proceedings of the 18th International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE 2022). Association for Computing Machinery, New York, NY, USA, 62–71.

Invited Talks

Microsoft PROSE Team

January 18, 2023

"Assessing the Quality of GitHub Copilot’s Code Generation"

The introduction of GitHub’s code generation tool, GitHub Copilot, appears to be the first well-established instance of an AI pair programmer. GitHub Copilot has access to a large number of open-source projects, enabling it to draw on more extensive code in various programming languages than other code generation tools. Although initial and informal assessments are promising, a systematic evaluation is needed to explore the limits and benefits of GitHub Copilot. The main objective of this study is to assess the quality of the code generated by GitHub Copilot. We also aim to evaluate the impact of the quality and variety of input parameters fed to GitHub Copilot. To achieve this, we created an experimental setup for evaluating the generated code in terms of validity, correctness, and efficiency. The results show that GitHub Copilot generated valid code with a 91.5% success rate. In terms of code correctness, out of 164 problems, 28.7% were solved correctly, 51.2% partially correctly, and 20.1% incorrectly. Our empirical analysis suggests that GitHub Copilot is a promising tool; however, further and more comprehensive assessment is needed in the future.

©2025 Burak Yetiştiren
