Give us a brief overview of your background:

I'm currently in the final year of my undergraduate program at IIIT Delhi, pursuing a BTech in Computer Science & Engineering. I've always been passionate about programming, and when the chance came to immerse myself in coding through my academic curriculum, I embraced it eagerly. My studies have taken me across various domains, yet it was during a break after my first semester that I became entranced by the world of Deep Learning.

Soon after, in my second semester, I joined the Laboratory for Computational Social Systems (LCS2). That affiliation gave me the opportunity to dive into a wide array of concepts centered on the crucial issues of Hate Speech and Bias. At the same time, I stayed involved in multiple open-source projects, primarily ones I used daily and had run into issues with. One of my most notable endeavors was contributing to KerasNLP, where I carried out a project during Google Summer of Code (GSoC).

Where are you located?

I'm based in Delhi.

What languages do you speak?

I'm fluent in English, Hindi, and Urdu. I can also read Arabic and understand some of it, though only to a limited extent.

One thing you are looking forward to?

On a personal level, I'm eagerly anticipating the next chapter of my life (possibly grad school, if I'm accepted) or returning to my job as a software developer.

As for the direction of the AI industry, I'm really pumped about AI-assisted tools. The concept of improved copilots strikes me as a major game-changer, especially considering how much heavy lifting is already being done by tools like GitHub Copilot.

What issue in the industry has your attention?

I'm really fascinated by the data quality dilemma. It's striking that models are hitting a roadblock not because of compute, but because of the shortage of high-quality training data. And what's equally tricky? Once these models are trained, keeping them up to date with an ever-changing world!

Why do you contribute to EleutherAI?

So, my journey with EAI began when I was wrapped up in another research project. I stumbled upon a tweet announcing the launch of the Pythia models, which, by the way, turned out to be a lifesaver for my work (even though that work never reached a solid conclusion :D). While poking around the Pythia GitHub repo, I spotted an issue about bias evaluation where they were looking for contributors. That's where I made my first-ever contribution, and you know what? The vibe in the server was pretty cool. I saw all these big-name researchers whose papers I'd totally geeked out over, just hanging out.

So, naturally, I decided, why not chip in on some other projects too, right?

What is important to you about Open Access to ML or LLMs?

It's becoming really important to be open about things like where a model's training data came from, how it was trained, the GPU hours it consumed, and the intermediate steps along the way (major props to Pythia for nailing that, by the way! 😉).

Another intriguing aspect is making these models genuinely accessible to a broader audience. Techniques like pruning, quantization, and breaking models down to improve scalability are all crucial, especially considering the immense number of people who rely on smartphones and other low-compute devices as their primary means of accessing the internet.

What contribution are you most proud of?

I take the most pride in my contribution to the bias evaluation work within the Pythia model suite. As LLMs gain more significance, our experiments showing reduced bias on benchmarks are a valuable contribution overall.