Technical

Imitation Learning: How well does it perform?

With the growing adoption of Imitation Learning (IL), this blog post goes back and takes a second look at the field, this time with more mathematical rigor. Specifically, we share a recent paper that provides a taxonomic framework and theoretical performance bounds for IL algorithms, then dive into the craft of proving such bounds.

Read
Technical

Vokenization: Multimodal Learning for Vision and Language

How can language models better model human learning? A new technique known as Vokenization addresses this by grounding language in information from the external visual world.

Read
Technical

How Is It So Good? (DALL-E Explained Pt. 2)

DALL-E is an incredibly powerful model from OpenAI capable of generating remarkably creative images from a text prompt. In this blog post we explore the transformer part of DALL-E, which is sort of like its brain: it's the component responsible for connecting the world of natural language with our visual world. Specifically, this blog will look at questions like "how is it so good?" and "why are transformers able to express and integrate so much information?"

Read
Technical

Teaching the Brain to Discover Itself

What if, by analyzing just a single picture of a brain, we could unravel its inner workings, even revealing subconscious thoughts and hidden feelings? A detailed introduction to the intersection of ML and neuroscience.

Read