Negotiable
Undetermined
Remote
United Kingdom
Summary: The AI Data Annotator (Code-Quality) role at Outlier AI involves creating and answering technical questions related to software engineering to enhance AI models. The position requires reviewing AI-generated code across various programming languages and providing expert feedback on coding practices and version control workflows. The ideal candidate will have a strong educational background in a relevant field and professional experience in software development. This role emphasizes reliability, transparency, and flexibility in a remote work environment.
Key Responsibilities:
- Creating and answering technical questions about software engineering concepts, coding best practices, and debugging strategies to help train AI models.
- Reviewing and evaluating code generated by AI in languages such as JavaScript, Python, Go, Java, TypeScript, Rust, and C++.
- Analyzing code quality, maintainability, and adherence to real-world engineering standards.
- Providing expert-level feedback on version control workflows, collaborative coding practices, and effective debugging techniques.
Key Skills:
- Bachelor’s degree in Computer Science, Data Analysis, or another STEM or related field from a top-200 global academic institution.
- Proficiency in at least two of the following languages: JavaScript, Python, Go, Java, TypeScript, Rust, or C++.
- Professional experience building and maintaining production-grade software repositories.
- Strong knowledge of Git (or similar version control systems), including experience with branching, merging, and collaborative development workflows.
- Hands-on experience conducting code reviews, debugging complex issues, and analyzing large codebases.
- Outstanding attention to detail and ability to clearly communicate technical feedback and coding best practices.
Salary (Rate): undetermined
City: undetermined
Country: United Kingdom
Working Arrangements: remote
IR35 Status: undetermined
Seniority Level: undetermined
Industry: IT
hackajob is collaborating with Outlier AI to connect them with exceptional tech professionals for this role. Outlier, owned and operated by Scale AI, is committed to improving the intelligence and safety of AI models. We believe AI can only perform as well as the data it’s trained on. That’s why we work with contributors from all over the world who help improve AI models by providing expert human feedback. This data has led to AI advancements for the world's leading AI labs and large language model builders. We’ve built a best-in-class remote work platform for our freelance contributors to provide valuable, specialized skills, and we in turn strive to provide them with a positive experience based on our core pillars of reliability, transparency, and flexibility.