AI Research Intern
Sentient
Posted 8 days ago
Description

About Sentient

At Sentient, we’re pioneering the decentralized artificial general intelligence (AGI) frontier, breaking free from the constraints of centralized AI models. Our cutting-edge platform is designed to democratize AI development, empowering communities to collaboratively train and control AI models in a truly open and accessible ecosystem.

Fueled by our expertise in distributed systems, cryptography, and AI, we’re building a game-changing environment that fosters open-source development and ensures fair value distribution. Say goodbye to the monopolies of the past – Sentient’s decentralized network promotes model composability and adherence to our foundational principles of transparency, trust, and inclusivity.

Imagine being part of a team that’s shaping the future of AGI, where innovation knows no boundaries, and the collective intelligence of global communities drives progress. Join us on this exhilarating journey as we redefine the AI landscape, unleashing the full potential of trustless, decentralized AGI.

Sentient is backed by leading Silicon Valley venture capital firms including Founders Fund, Pantera, and Framework.

Responsibilities

  • Work part-time or full-time during the year with the core Sentient Research team

  • Conduct cutting-edge generative AI research in a fast-paced environment

  • Design new agent architectures to improve end-to-end performance of AI workflows

  • Design, run, and evaluate experiments to improve LLMs on various benchmarks

  • Execute data engineering tasks to curate data for LLM pre-training, fine-tuning, and RAG

  • Integrate and evaluate models with multi-modal capabilities for different verticals

  • Read conference papers on generative AI and knowledge retrieval to understand and evaluate new research in the space

  • Replicate, evaluate, and integrate theoretical data-curation approaches, fine-tuning algorithms, and agent architectures from research papers into real products

  • Set up fine-tuning and evaluation pipelines on AWS, GCP, and other compute providers

  • Manage compute resources and monitoring for AI workloads, keeping track of experiments and assessing results

Required Qualifications

  • Hands-on experience in generative AI research and/or engineering, whether in industry or through academic work during a Bachelor’s, Master’s, or PhD degree, with corresponding published work

  • Demonstrated expertise in deep learning and transformer models

  • Mastery of Python (PyTorch, NumPy, agentic frameworks) for building AI workflows, fine-tuning models, and writing evaluations

  • Strong foundation in data structures, algorithms, and software engineering principles

  • Familiarity with methods for training LLMs (distillation, supervised fine-tuning, policy optimization)

  • Excellent problem-solving and analytical skills, with a proactive approach to challenges

Values

  • Appreciate and pursue deep expertise

  • Embrace extreme ownership and bias for action

  • Take risks and act on ambition with integrity and empathy

  • Pursue relentless innovation and experimentation

  • Invest in personal growth and team collaboration

Benefits

  • Competitive salary

  • Flexible PTO and WFH policy

  • Top-of-the-line engineers and technology

  • Opportunity to shape the direction of a pioneering open AI platform
