Thanks to the latest advances in generative artificial intelligence, we can turn our imagination into reality. Even if we don't know how to draw, projects like Midjourney allow us to create drawings or photographs from our descriptions.
However, it is not quite that simple. The key to getting a good AI-generated image is precisely how we describe it, and the best results come from writing with the AI's limitations in mind and an understanding of how it interprets our prompts. But what if we didn't have to write anything at all for an AI to draw what we are thinking?
AI that reads minds
That's what researchers at Osaka University have achieved, with a study (PDF) that spanned more than a decade but was published, fittingly, just as generative AI is on everyone's lips.
The researchers acknowledge that generative models have recently succeeded in recreating realistic images, but only when prompted with sentences that faithfully describe what the user has in mind, which is a challenge in itself. To solve this problem, they propose a new diffusion model capable of reconstructing images directly from the user's brain activity.
In the experiments, the researchers obtained data on brain activity using functional magnetic resonance imaging (fMRI), which can generate images of activity in various regions of the brain. The system they created was able to interpret this data to create high-resolution images of what the volunteers were thinking.
Specifically, in the tests the volunteers were shown several images, and the AI was able to "interpret" their brain activity and create an image very similar to the original they had seen with their own eyes. The results speak for themselves: although the resulting images are not exactly the same as the originals, the general idea is preserved, including the perspective, the colors, and the position of the elements. The results differed for each test subject, since each person "thinks" differently, but they all shared common elements.
In fact, this isn't the first study to try to "see" the images we think about; these researchers' breakthrough was their use of a diffusion model, which made it possible to obtain results this faithful to the original. To do so, they had to map specific components of the model to specific regions of the brain.
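To give a rough idea of what "mapping brain activity to a diffusion model" can look like in practice: one common approach in this line of research is to fit a simple linear (ridge) regression from fMRI voxel responses to the latent vectors that a diffusion model uses to render an image. The sketch below is purely illustrative, using synthetic random data; the array shapes, noise level, and regularization value are all assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for real data (shapes are illustrative assumptions):
# X = fMRI voxel responses, one row per viewed image;
# Z = the latent vector a diffusion model would use to render that image.
n_trials, n_voxels, latent_dim = 200, 100, 16
true_W = rng.normal(size=(n_voxels, latent_dim))
X = rng.normal(size=(n_trials, n_voxels))
Z = X @ true_W + 0.1 * rng.normal(size=(n_trials, latent_dim))

def ridge_fit(X, Z, alpha=10.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Z."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Z)

W = ridge_fit(X, Z)
Z_pred = X @ W  # decoded latents; a diffusion model would turn these into images

# Simple quality check: correlation between decoded and true latents
corr = np.corrcoef(Z_pred.ravel(), Z.ravel())[0, 1]
```

In a real experiment the regression would be fit on held-out scans, and the decoded latents would be fed into the diffusion model's denoising process rather than compared numerically; this sketch only shows the linear-mapping step.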