Miloš Živadinović – Faculty of Organizational Sciences, Jove Ilića 54, 11000 Belgrade, Republic of Serbia
Keywords:
LLM;
Algorithm coding tests;
Recruitment
Abstract: Coding-based programming interview questions are one of the most common approaches to screening new candidates. To pass such examinations, candidates must apply a variety of skills and knowledge to solve the assignments correctly within the given time and memory constraints. With the advent of LLM (Large Language Model) architectures such as ChatGPT, we are able to demonstrate that the most common interview questions are trivial as a measure of knowledge. By comparing a dataset of common programming interview questions with answers generated by ChatGPT, we obtained significant results in favor of ChatGPT as a means of solving programming interview questions, with an acceptance rate of 96.58%, which is 46.45% higher than the average. We conclude from these results that the existing practice of programming interview questions is flawed and that significant changes are needed to keep it relevant, or that it should be abandoned entirely in candidate testing.
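For illustration only, the evaluation described in the abstract can be sketched as a loop that sends each interview question to the ChatGPT API and records whether the generated solution is accepted by the online judge. The model identifier, prompt wording, and the submit_to_judge() helper below are assumptions rather than the paper's actual pipeline; in the study, acceptance is determined by the judging platform itself.

    # Minimal sketch, assuming the openai Python package (v1+) and an API key
    # in the environment; model name, prompt, and submit_to_judge() are
    # illustrative placeholders, not the paper's exact setup.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def solve(problem_statement: str) -> str:
        """Ask the model for a Python solution to one interview question."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model identifier
            messages=[
                {"role": "system",
                 "content": "Solve the following coding problem in Python. "
                            "Return only the code."},
                {"role": "user", "content": problem_statement},
            ],
        )
        return response.choices[0].message.content

    def submit_to_judge(problem_statement: str, code: str) -> bool:
        """Hypothetical placeholder: submit `code` to the judging platform and
        return True if all test cases pass within the time and memory limits."""
        raise NotImplementedError

    def acceptance_rate(problems: list[str]) -> float:
        """Share of generated solutions accepted by the judge."""
        accepted = sum(submit_to_judge(p, solve(p)) for p in problems)
        return accepted / len(problems)

The acceptance rate reported in the abstract corresponds to the fraction computed by acceptance_rate() over the evaluated question set.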
LIMEN Conference
9th International Scientific-Business Conference – LIMEN 2023 – Leadership, Innovation, Management and Economics: Integrated Politics of Research – SELECTED PAPERS, Hybrid (Graz University of Technology, Graz, Austria), December 7, 2023
LIMEN Selected papers published by the Association of Economists and Managers of the Balkans, Belgrade, Serbia
LIMEN Conference 2023 Selected papers: ISBN 978-86-80194-79-0, ISSN 2683-6149, DOI: https://doi.org/10.31410/LIMEN.S.P.2023
Creative Commons Non Commercial CC BY-NC: This article is distributed under the terms of the Creative Commons Attribution-Non-Commercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission.
Suggested citation
Živadinović, M. (2023). Application of LLMs for Solving Algorithm Coding Tests in Recruitment. In V. Bevanda (Ed.), International Scientific-Business Conference – LIMEN 2023: Vol 9. Selected papers (pp. 21-27). Association of Economists and Managers of the Balkans. https://doi.org/10.31410/LIMEN.S.P.2023.21
References
Adamopoulou, E., & Moussiades, L. (2020). An Overview of Chatbot Technology. Artificial Intelligence Applications and Innovations, 584, 373–383. https://doi.org/10.1007/978-3-030-49186-4_31
Arefin, S., Heya, T., Al-Qudah, H., Ineza, Y., & Serwadda, A. (2023). Unmasking the Giant: A Comprehensive Evaluation of ChatGPT’s Proficiency in Coding Algorithms and Data Structures. Proceedings of the 16th International Conference on Agents and Artificial Intelligence. https://doi.org/10.5220/0012467100003636
Barat, M., Soyer, P., & Dohan, A. (2023). Appropriateness of Recommendations Provided by ChatGPT to Interventional Radiologists. Canadian Association of Radiologists Journal, 74(4), 758-763. https://doi.org/10.1177/08465371231170133
Basic Calculator II. (n.d.). In LeetCode. https://leetcode.com/problems/basic-calculator-ii/description
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners (arXiv:2005.14165). arXiv. https://doi.org/10.48550/arXiv.2005.14165
ChatGPT. (n.d.). https://chat.openai.com
Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. de O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., … Zaremba, W. (2021). Evaluating Large Language Models Trained on Code (arXiv:2107.03374). arXiv. http://arxiv.org/abs/2107.03374
Dahlkemper, M. N., Lahme, S. Z., & Klein, P. (2023). How do physics students evaluate artificial intelligence responses on comprehension questions? A study on the perceived scientific accuracy and linguistic quality of ChatGPT. Physical Review Physics Education Research, 19(1). https://doi.org/10.1103/physrevphyseducres.19.010142
Divide Two Integers. (n.d.). In LeetCode. https://leetcode.com/problems/divide-two-integers/description
Dong, Y., Jiang, X., Jin, Z., & Li, G. (2023). Self-collaboration Code Generation via ChatGPT (arXiv:2304.07590). arXiv. http://arxiv.org/abs/2304.07590
Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30). https://doi.org/10.1073/pnas.2305016120
GitHub Copilot: Your AI pair programmer. (n.d.). In GitHub. https://github.com/features/copilot
HackerRank – Online Coding Tests and Technical Interviews. (n.d.). In HackerRank. https://www.hackerrank.com/
Kamal, U., Tonmoy, T. I., Das, S., & Hasan, M. K. (2020). Automatic Traffic Sign Detection and Recognition Using SegU-Net and a Modified Tversky Loss Function With L1-Constraint. IEEE Transactions on Intelligent Transportation Systems, 21(4), 1467–1479. https://doi.org/10.1109/TITS.2019.2911727
LeetCode – The World’s Leading Online Programming Learning Platform. (n.d.). https://leetcode.com/
Li, P. L., Ko, A. J., & Zhu, J. (2015). What Makes a Great Software Engineer? 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering. https://doi.org/10.1109/icse.2015.335
Ling, M. H. (2023). ChatGPT (Feb 13 Version) is a Chinese Room. https://doi.org/10.48550/ARXIV.2304.12411
Liu, K., Han, Y., Zhang, J. M., Chen, Z., Sarro, F., Harman, M., Huang, G., & Ma, Y. (2023). Who Judges the Judge: An Empirical Study on Online Judge Tests. Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis. https://doi.org/10.1145/3597926.3598060
Liu, Y., Le-Cong, T., Widyasari, R., Tantithamthavorn, C., Li, L., Le, X.-B. D., & Lo, D. (2023). Refining ChatGPT-Generated Code: Characterizing and Mitigating Code Quality Issues. https://doi.org/10.48550/ARXIV.2307.12596
Mastropaolo, A., Pascarella, L., Guglielmi, E., Ciniselli, M., Scalabrino, S., Oliveto, R., & Bavota, G. (2023). On the Robustness of Code Generation Techniques: An Empirical Study on GitHub Copilot (arXiv:2302.00438). arXiv. http://arxiv.org/abs/2302.00438
Moradi Dakhel, A., Majdinasab, V., Nikanjam, A., Khomh, F., Desmarais, M. C., & Jiang, Z. M. J. (2023). GitHub Copilot AI pair programmer: Asset or Liability? Journal of Systems and Software, 203, 111734. https://doi.org/10.1016/j.jss.2023.111734
OpenAI. (2023). GPT-4 Technical Report (arXiv:2303.08774). arXiv. http://arxiv.org/abs/2303.08774
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners.
Shone, J. (2022). Yes, You Can Make an App Too: A Systematic Study of Prompt Engineering in the Automatic Generation of Mobile Applications from User Queries.
The Skyline Problem. (n.d.). In LeetCode. https://leetcode.com/problems/the-skyline-problem/description
Sorensen, T., Robinson, J., Rytting, C., Shaw, A., Rogers, K., Delorey, A., Khalil, M., Fulda, N., & Wingate, D. (2022). An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). https://doi.org/10.18653/v1/2022.acl-long.60
Top K Frequent Elements. (n.d.). In LeetCode. https://leetcode.com/problems/top-k-frequent-elements/description
Top Interview Questions. (n.d.). In LeetCode. https://leetcode.com/problem-list/top-interview-questions/
Touvron, H., Martin, L., & Stone, K. (2023). Llama 2: Open Foundation and Fine-Tuned Chat Models.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2023). Attention Is All You Need (arXiv:1706.03762). arXiv. https://doi.org/10.48550/arXiv.1706.03762
Wiggle Sort II. (n.d.). In LeetCode. https://leetcode.com/problems/wiggle-sort-ii/description
Witteveen, S., & Andrews, M. (2022). Investigating Prompt Engineering in Diffusion Models. Cornell University – arXiv. https://doi.org/10.48550/arxiv.2211.15462