Article

Real-Time Electroencephalogram Data Visualization Using Generative AI Art

by Andrei Virgil Puiac 1,2, Lucian-Ionel Cioca 3,*, Gheorghe Daniel Lakatos 1 and Adrian Groza 2
1 Institute for Research in Circular Economy and Environment “Ernest Lupan”, 400561 Cluj-Napoca, Romania
2 Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
3 Industrial Engineering and Management Department, Lucian Blaga University of Sibiu, 550024 Sibiu, Romania
* Author to whom correspondence should be addressed.
Designs 2025, 9(1), 16; https://doi.org/10.3390/designs9010016
Submission received: 26 December 2024 / Revised: 24 January 2025 / Accepted: 26 January 2025 / Published: 30 January 2025
(This article belongs to the Section Smart Manufacturing System Design)

Abstract

This study addresses the need to research the visualization of brainwaves, using generative AI art systems as the method. Data visualization is an important part of understanding the evolution of the world around us, offering representations that go beyond numbers. Generative AI systems now make it possible to visualize data in new ways, including real-time artistic renderings of these data. Real-time rendering applies naturally to brainwave visualization, where the electroencephalogram (EEG) serves as the input to generative AI systems. Using brainwave measurements as input to real-time generative AI represents a novel intersection of neuroscience and art in the field of neurofeedback art. The main question this paper addresses is the following: how can brainwaves be effectively fed into generative AI art systems, and where can the outcome lead in terms of progress? EEG data were successfully integrated with generative AI to create interactive art. The resulting installation provided an immersive experience in which the image changed with the user’s mental focus, demonstrating the impact of EEG-based art.

1. Introduction

Recently, artificial intelligence technologies for art design and data visualization have made significant progress and are used across many fields of science and technology. They make it possible to create complex graphic objects, generate creative content, and automate processes that previously required significant human effort. Processing large amounts of data and revealing structures and relationships that humans may overlook has become a reality, enabling interactive models, visualizations, and simulations that contribute to a better understanding of complex processes. One area actively applying generative AI art is the medical field, which offers approaches to analyzing brain activity. Brain activity can be measured using wearable EEG devices, with the data processed and interpreted in node-based generative systems [1]. With this intervention in generative AI systems, images derived from EEG data can be generated and used for various purposes, including data visualization. Generative AI art can now be produced in real-time [2], making it a powerful tool for generating images out of pure noise. EEG technology, meanwhile, has evolved into wearable devices that are convenient for research [3], opening new research areas and possibilities beyond clinical applications.
Generative AI systems have become more powerful as well, alongside the hardware that supports them, reaching a point where a single personal laptop can generate images at around seven frames per second. By exploring this interdisciplinary approach, this paper aims to contribute to the growing discourse on brainwaves in art [4], offering insights into the transformative potential of brainwave-driven creative processes combined with real-time AI image generation systems. The approach matters because visualizing EEG data through artistic means can make this complex information accessible and understandable, even for non-experts [5]. It also opens a discussion about therapeutic applications, for example in neurofeedback [6]. Neurofeedback therapy could be enhanced with real-time generative AI art, helping us achieve a better understanding of our minds. In a clinical setting, an enhanced understanding through visualization can lead to better communication between patients and professionals and potentially improve receptivity between the two parties. Professionals gain a clearer view of the technical side and of how patients relate and engage during the process. This way of visualizing data could also help address the growing distrust of expert opinions in society [7], where data presented as intuitive imagery might encourage individuals to reconsider their views.
Furthermore, the application of EEG-driven generative AI systems offers new avenues for creative expression and education. These technologies let users interact directly with their ideas through neural activity, creating a unique way to realize creative ideas, improving the quality of educational processes, and providing opportunities for self-realization [8,9]. This is particularly important for individuals with physical limitations, offering them a new range of possibilities for expression [10], and it can raise awareness of the needs of those around us. In such cases, the technology can serve as an extension of the mind, allowing self-expression despite physical restrictions and deepening our understanding of how important this possibility is. Theoretically speaking, this kind of approach aligns well with Descartes’ idea of mind–body dualism [11]. EEG technology can act as a non-invasive brain–computer interface (BCI), giving a person a new form of input to a system and thereby new possibilities [12].
The information mentioned above is presented in the following sections, starting with a study of the tools and methods used. The Materials and Methods chapter is split into three parts: the EEG headsets, the generative AI tools used, and the combination of both. These parts describe the tools and systems used and how they work together. The paper then proceeds to the Results chapter, which presents the system developed with the tools described in the first part of this paper. The system was built with attention to detail, with the main purpose of applying the techniques described here in a practical way so that people could get a glimpse of the technology. The practical project was made with commercially available tools, making it reproducible and customizable. The Discussion chapter closes with a historical background of the tools and other similar projects that help place this paper in context.

2. Materials and Methods

2.1. EEG Headsets

Both BCIs and AI are tied to technological progress, and they have evolved at a similar pace, with significant progress still being made today.
Studying the tools and systems currently on the market offers a better understanding of the potential impact this kind of work could have on society.
Up until the 21st century, EEG technology was accessible only to specialists in the field; since then, EEG headsets have emerged from laboratories and clinics and become available to the public as therapy devices. The accessible devices are not as accurate as laboratory EEGs, and many of them limit access to the raw EEG data [13], but they can still offer information about the mood of the wearer for therapeutic purposes. They can be used to monitor emotions, sleep, depression, and fatigue [14]. Consumer EEG technology can indicate, for example, whether the wearer is in a state of anxiety or calm. One such device is the Muse headband, made by Interaxon (Toronto, ON, Canada) in 2014. It was designed for meditation, giving users real-time audio feedback on their mental state so they can adjust accordingly, and it tracked progress over time. The second version of this device, Muse 2, still made and sold today, looks like a headband and is meant to be worn as is, with no medical knowledge needed. The Muse 2 headband is composed of 4 main electrodes and 3 reference electrodes for noise reduction [15]. The user does not require any medical knowledge to place them properly, because they are built into the headband. It has no wires, and it connects to the user’s phone via Bluetooth, making it easy to use for most people. Because of that, and the accessible price, it gained a large community of people who use the device as intended. This accessibility also appealed to a new group of people interested in the applicability of the technology. Some developers started to study its capabilities to create apps and develop BCIs, including for art making. The community shares its projects on GitHub, and some members make tutorials on YouTube; one example comes from The Interactive & Immersive HQ (a large community dedicated to generative art made in Touchdesigner99), where member Crystal Jow built an interactive installation in February 2023 [16]. At a pilot exhibition project in London’s Courtauld Gallery in 2023, visitors were asked to wear the Muse 2 headband while they looked at art, and the collected data were visualized in real-time and studied [17]. People were able to move around the museum because of the portable nature of the device, and the EEG itself was called a “small wireless headset” [18]. Other interesting and promising companies to follow in this area are NeuroSky, Emotiv, OpenBCI, Starlab, and Mindo. In this project, we focused on the Muse S, the EEG headset shown in Figure 1 below. The Muse S headband was selected for its balance of accessibility, cost, and performance. Compared to other devices such as the Emotiv Epoc and NeuroSky headsets, the Muse S provides adequate signal quality and user-friendliness, despite having fewer electrodes.
One important step in these systems is collecting and processing the brainwaves. If this step is not performed properly, we end up interpreting false data, so it is crucial to ensure that the gathered data are correctly collected and usable. The electricity produced by the brain is small, measured in microvolts (µV), with 1 µV being 0.000001 V. It takes the shape of a wave with a frequency measured in Hz, and both the amplitude and the frequency must be processed to gain an understanding of one’s state of mind [1]. The frequency range is split into 5 bands (Delta < 4 Hz, Theta 4–7 Hz, Alpha 8–16 Hz, Beta 16–31 Hz, Gamma > 31 Hz), each associated with a level of awareness, from low to high [19]. This reflects the general state of the brain, and other quantitative studies suggest that certain asymmetries in the brain may further help determine the emotions one is feeling [20]. This information is useful for setting thresholds and filters for further processing.
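As a concrete illustration of such thresholds, the sketch below estimates per-band power from one channel of raw EEG and applies a simple alpha-dominance rule. It is a minimal sketch: the band edges follow the values quoted above, while the sampling rate, the 0.3 ratio threshold, and the function names are illustrative assumptions rather than part of the system described in this paper.

```python
import numpy as np
from scipy.signal import welch

# Band edges (Hz) as quoted above; the gamma upper limit is an assumption.
BANDS = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 16),
         "beta": (16, 31), "gamma": (31, 45)}

def band_powers(signal_uv, fs=256):
    """Average spectral power per band for one EEG channel (samples in microvolts)."""
    freqs, psd = welch(signal_uv, fs=fs, nperseg=fs * 2)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]))
    return powers

def is_relaxed(powers, alpha_ratio=0.3):
    """Simple threshold: treat the segment as 'relaxed' when alpha dominates."""
    total = sum(powers.values()) or 1.0
    return powers["alpha"] / total > alpha_ratio
```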
With the low-voltage signal that our brain emits, it can easily be mixed with ambient noise, muscle movement artifacts, small electric discharges from nearby devices such as headphones, and so on. The EEG will most likely pick these up along with the brainwaves, so the signal needs to be processed. Wearable EEGs on the market can have a noisy signal since they are usually not used in a clean environment under proper supervision, and most companies that sell these devices compensate for this in software. This suits the average consumer, since, as mentioned earlier, the technology should be open to people without much technical knowledge. The fact that one can connect the EEG to a phone over Bluetooth with basic instructions shows that this tool is becoming accessible and is evolving. The first thing to do to obtain a good signal is to make sure the EEG headset sits tight on the head, adjusting it based on the instructions [21]. All the electrodes need to touch the scalp; if there is hair in between, the signal may be noisy. This is all the information users need to know, and the process is explained with images and blinking indicators in the software to keep it accessible and understandable.
A developer, on the other hand, might have to denoise the data themselves. A general method is to remove noise from the signal by using a reference electrode that collects noise, inverting its signal, and subtracting it from the others. Most commercial-grade EEG headsets come with applications that handle this technical problem well; for artistic experimentation, that is good enough. For other systems, we can use the techniques described by Ian Shelanskey, a known Touchdesigner99 programmer and consultant, who made a course about denoising signals in generative environments. He explained how setting up a threshold and using successive filters and logic gates can help obtain clean signals, ready to be used in interactive installations and generative projects [22]. After measuring the brain activity with the EEG and obtaining a good signal, we can feed the signal into generative AI systems without much effort. Many wearable EEG headsets allow OSC (open sound control) streaming to any device given just an IP address and a port number. This stream can then be accessed by many programs, including the industry-standard Ableton for live music production, VCV Rack for modular synthesis, and Max and Touchdesigner99 for live visuals [23]. This gives the user many options for what to do with their brainwaves, and it is easy to use since no code is required. Some companies even process the signal in the cloud before streaming it to the desired device and software; since this is personal data, the user must agree to the terms and conditions of the service provider. In the case of the Muse headband, we can obtain both raw data and slightly processed data, depending on our choice. The processed channels give us data for the Delta, Theta, Alpha, Beta, and Gamma sections of the spectrum with an approximation of their intensity, so we can measure the awareness level. The raw data contain the EEG itself, the waves of all the electrodes. Figure 2 shows the OSC stream with all the data going into Touchdesigner99 on the left and only the processed channels on the right. The data in Figure 2 were based on brain activity recorded at the time of writing this paper.
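As an illustration of how such an OSC stream can be received outside of Touchdesigner99, the sketch below listens for Muse-style OSC messages with the python-osc library. The address paths, port, and handler names are assumptions based on common Muse streaming setups; the exact paths depend on the app doing the streaming.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Handler signatures follow python-osc: the OSC address, then the message values.
def on_alpha(address, *values):
    # Processed "absolute alpha" values, typically one float per electrode.
    alpha = sum(values) / len(values)
    print(f"{address}: mean absolute alpha = {alpha:.3f}")

def on_raw_eeg(address, *values):
    # Raw electrode samples in microvolts.
    print(f"{address}: raw samples = {values}")

dispatcher = Dispatcher()
dispatcher.map("/muse/elements/alpha_absolute", on_alpha)  # assumed address path
dispatcher.map("/muse/eeg", on_raw_eeg)                    # assumed address path

# Listen on all interfaces on port 5000 (whatever port the phone app targets).
server = BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher)
server.serve_forever()
```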
In the future, there is hope that AI will process this signal for us by cross-referencing the gathered data with large datasets, giving us more powerful and accessible software for developing art with EEG. The company Bitbrain noted that EEG data are “not easy to interpret: it has a lot of noise, varies significantly between individuals and, even for the same person, changes substantially over time” [24], and proposed using deep learning for this in 2020. More companies have joined this quest, but more research is needed since AI itself is still not fully understood. To create art, one could experiment by wearing the EEG throughout the day and observing the collected data, gathering information on general moods based on the activities being carried out. It is important to tweak the algorithms until the results are appropriate and usable. For example, in the case of a trade show where many people try an EEG headset, a quantitative approach could be considered [9]. Programming the system to fit multiple individuals at a time risks losing signal quality. Because this technology is so easy to use, we hope for better-trained AI and larger datasets in the future; for now, awareness of the possible technological limitations is needed.

2.2. Generative AI Art Systems

AI image generators have a short history, as mentioned before, and progress is made every day as more powerful hardware, smarter learning and generation models, and larger datasets are developed. One of the most versatile models available is “Stable Diffusion”, released in 2022. It is the most accessible, popular, and easy-to-use model for image generation and training because it is open-source and encourages people to work and learn together, being transparent by nature [25]. Other tools on the market require payment, have limitations of some sort, or are not as easy to work with. Stable Diffusion runs offline; because it is open-source, people can download models trained by others and can even train models with their own images on their own laptop at no cost. Besides that, its Python 3 code can be modified to integrate it into other software. Elburz Sorkhabi, a generative artist based in Toronto, explained how to build an API for Stable Diffusion so it can work with Touchdesigner99 [26]. Others made it even easier by putting the plugin online. Akio Kodaira, Chenfeng Xu, Toshiki Hazama, Takanori Yoshimoto, Kohei Ohno, Shogo Mitsuhori, Soichi Sugano, Hanying Cho, Zhijian Liu, and Kurt Keutzer created the StreamDiffusion GitHub repository, which generates images in real-time. StreamDiffusion uses PyTorch, Conda, and a Stable Diffusion pipeline to generate around 7 frames per second. Lyell Hintz went on to create an API for StreamDiffusion and obtained real-time AI images in Touchdesigner99. This is a major achievement in the field of live visuals, since they can now be AI-powered.
It was only a matter of time before interested people filled the forums with this integration. They started to make tutorials on how to use it and got creative with projects based on this breakthrough. One of them was Bileam Tschepe from Berlin, known for his educational activities and the tutorials he posts on the internet [27].
Generative AI art represents a new class of tools that can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation in an automated manner [28]. In the following part of this paper, the focus is on the open-source image generation model Stable Diffusion.
A local version of StreamDiffusion is an image generation setup that does not run online; all the images are generated by the user’s graphics card, and all the required data are on their hard drive. This allows the user to modify data models and write custom code, giving more room to experiment, and since servers can be crowded, it is a good way to ensure that the system runs smoothly. Custom code can integrate Stable Diffusion into other programs, like Touchdesigner99, through an API, as mentioned earlier.
With those tools, we can build systems that create generative art and images, and with the StreamDiffusion integration, the system can process them further. In this way, it can power interactive installations using all kinds of input devices, such as the Kinect, webcams, Arduino sensors, sound, and so on, with results as complex as artificial intelligence can produce. This is where things can get heavy on the hardware, since all that processing can stress our machines, especially in real-time. In this case, Touchdesigner99 and the Stable Diffusion integration mentioned earlier were used. Touchdesigner99 is where everything meets, being a powerful generative art-making software we can use to bridge data and image generation. In this project, we used Touchdesigner99 version 23.12120. The OSC signal is a constant flow of numbers, so we used a channel operator that deals with data only [29]. Each channel has a numerical value, and that value is translated to parameters in other operators that deal with images or 3D shapes. For example, a channel’s value can be assigned to the size of a circle, producing a circle that changes its scale according to the numbers in that channel. That value could come from the EEG’s Delta channel, so each time a person relaxes and the Delta frequency becomes more predominant in the EEG data, a larger circle forms. This is a simple example, but with more channels and processing, more complex results are possible.
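Inside Touchdesigner99, the mapping described above can be expressed as a CHOP Execute callback in the built-in Python scripting; the sketch below scales a circle operator with an incoming channel value. The operator name 'circle1', the channel name 'delta_absolute', and the radius parameter names are placeholders for whatever the actual network uses, not the exact setup of this project.

```python
# CHOP Execute DAT callback (Touchdesigner99 Python).
# Fires whenever a monitored channel value changes.
def onValueChange(channel, sampleIndex, val, prev):
    if channel.name == 'delta_absolute':          # placeholder channel name
        # Clamp to 0..1 and remap to a visible size range.
        size = 0.1 + 0.9 * max(0.0, min(1.0, val))
        circle = op('circle1')                    # placeholder circle operator
        circle.par.radiusx = size                 # placeholder parameter names
        circle.par.radiusy = size
    return
```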
With the Stable Diffusion integration in place, it is possible to control the model’s prompt, seed, number of steps, and so on with the EEG channels. The prompt is a text description of the image to generate, and prompt texts can be changed using the EEG data, essentially generating different images for each state. The seed is a number used to initialize the denoising algorithm; it is used to make different images from the same prompt. The number of steps controls how much denoising is performed to reach the desired image, with more steps resulting in a better-quality image. StreamDiffusion also has an image-to-image generation mode, so it is possible to feed in all sorts of images created from the EEG data and generate from there. Using data and artificial intelligence in this way is as simple as assigning channel values to the parameters we want to drive. Figure 3 shows an example with the circle: the EEG data on the left, the circle in the middle, and the same circle after being slightly processed by AI on the right, using the prompt that is set by default when opening the program.
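The sketch below shows, in framework-agnostic Python, how EEG values could be mapped onto these generation parameters (prompt, seed, and step count). It is illustrative only: the parameter names are not those of the StreamDiffusion Touchdesigner99 component, and the thresholds are arbitrary assumptions.

```python
def eeg_to_generation_params(alpha, beta, base_seed=42):
    """Map two EEG band intensities to hypothetical diffusion parameters."""
    relaxed = alpha > beta  # crude state decision for illustration
    return {
        "prompt": "flowers, colors, Kodak" if relaxed
                  else "dead flowers, black and white film, sad",
        "seed": base_seed + int(alpha * 1000),  # vary the seed with the signal
        "steps": 4 if relaxed else 2,           # more steps -> more denoising
    }

print(eeg_to_generation_params(alpha=0.42, beta=0.15))
```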

2.3. EEG Signals in Generative AI Art

As described above, both the EEG and artificial intelligence have become available to the masses, and this is the most interesting part. Everyone should be able to experiment with these tools, and their democratization is what makes the future promising for research and advancement in the field. Artworks created with BCI technology by people affected by accidents were highlighted in the first exhibition of its kind [10]. The hope is that AI will bring even more possibilities for these people and help them express themselves as freely as they wish.
The project was made using the wearable EEG headband Muse S. It is reliable enough to measure the activity level of the frontal lobe while being easy to use on multiple participants, since it is a headband. The Muse S was connected to a phone via Bluetooth, and the phone streamed the data to the laptop using the open sound control (OSC) protocol. The data were streamed into Touchdesigner99, where they were processed and fed into a real-time AI image generator implementation. The “absolute” value of a signal shows how much of one specific wavelength is present in the brain; Muse S streams these values as well, and they were chosen here because processing raw EEG data is more complicated.
The “absolute alpha” value reflects the attention level, since the alpha brainwave is more present during reflection and problem-solving creativity [30]. It is a good brainwave to address in an art exhibition, since the others are associated with sleep, deep meditation, light sleep, and even anesthesia, while beta waves indicate high alertness [31]. A person taking part in this project should be neither asleep nor highly alert, and alpha brainwaves proved to be the right middle ground, based on empirical observation made throughout the process.
In Figure 4 below, we can see the node-based program in Touchdesigner99. The green boxes are the channel operators; they deal with numbers and signals, and in this case they show the EEG signal. The blue boxes are the texture operators that deal with images. These operators pass specific types of information to one another so that the information can be processed. The dotted lines are where the EEG data reach the texture operators to modify their parameters. The first texture operator generates a color noise pattern that serves as a base for the AI to work with; the alpha wave modifies the saturation level and the scale of the noise.
When the noise is fed into the real-time AI implementation, the StreamDiffusion integration mentioned earlier is used to generate flowers. There are two prompt boxes, and each has a weight that is controlled by the alpha brainwave received from the EEG. The weight determines how much a specific prompt affects the generated images, on a scale from 0, meaning no influence, to 1, meaning a large influence. The value placed there controls what type of images are generated. The first prompt box is for when the participant is aware, and the other is for when the awareness level drops, with the first prompt being “flowers, colors, Kodak” and the second being “dead flowers, black and white film, sad”. The same parameter, the absolute alpha value, modifies both weights, with one of them reversed so that one drops when the other rises. Figure 5 presents the prompt boxes and the weight values of each. In that image, the alpha wave is not dominant, as it represents only 15%; the weight of the second prompt box is higher, telling the AI to generate more dead flowers than bloomed ones.
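A minimal sketch of this complementary weighting, assuming the absolute alpha value has already been normalized to the 0–1 range; the variable names are illustrative, and in the actual network the two values feed the weight parameters of the prompt boxes shown in Figure 5.

```python
def prompt_weights(alpha_absolute):
    """Return (awake_weight, drowsy_weight) that always sum to 1."""
    a = max(0.0, min(1.0, alpha_absolute))  # clamp to 0..1
    weight_awake = a          # drives "flowers, colors, Kodak"
    weight_drowsy = 1.0 - a   # drives "dead flowers, black and white film, sad"
    return weight_awake, weight_drowsy

# With alpha at 15%, as in Figure 5, the second prompt dominates:
print(prompt_weights(0.15))  # -> (0.15, 0.85)
```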
The system is not excessively complex, and its simplicity makes it accessible and understandable. This is one of the desired outcomes of this project, since it should be able to perform in multiple circumstances and for multiple people. Because the AI is so “smart”, modifying the prompt boxes produces totally different images in all sorts of styles, so the algorithm can be slightly adjusted to generate new content without adding new nodes or creating new shapes and functions. The hard work is done by the laptop, which generates images locally, relying only on its own power, with no online services or application programming interface. Computing power can set a limit at this stage. On a powerful laptop, the system achieves around 7 frames per second, which is functional even if it is not 24-frame-per-second video: every second, the laptop must create 7 new images that might never have existed before, generated from colored noise while also taking in the EEG data, which demonstrates the system’s capacity. It shows how far technology has come that brainwave data can be visualized with real-time AI-generated images on ordinary computers, hopefully gaining more power from now on.
While EEG-driven generative AI has been explored in neurofeedback and brain–computer interfaces, this project emphasizes a novel integration focused on accessibility and practicality. By leveraging commercially available tools, such as the Muse S headband and Touchdesigner, the system demonstrates how these technologies can be adapted for real-time, user-friendly applications. Unlike previous studies, this work prioritizes creating an interactive, immersive experience for non-experts, bridging the gap between technical research and accessible visualization.

3. Results

By following the steps mentioned above, the project demonstrated a working system that could visualize EEG data. Using generative AI art, it displayed the real-time AI-generated images on a screen, with the EEG data on a smaller screen right below it, so that the user could see the connection between the data and the images. Figure 6 presents a rendering of the setup. The space was carefully chosen to give a sense of privacy and comfort in which participants could feel free to let their minds wander. The project took place in a small room to create a well-balanced atmosphere, and for the given circumstances, the chosen setup was sufficient to provide a sense of security and discretion. One by one, each person got the chance to try the headset and use it as long as they needed, giving them space to experiment with the installation.
It was presented as an interactive installation, as shown in the figures, and people could interact with the system by putting the EEG headband on their heads. The subjects were observed the whole time to make sure the installation worked as intended; Figure 7 and Figure 8 are screenshots of the data we monitored. The whole process also had to be explained to the participants, along with what they should expect to see. People engaged quickly with the idea and tried to conduct themselves in a way that would let them present both states, attempting to shift their mental focus by calming down, focusing on the colors of the flowers, or trying to zone out. About half of the people present managed to interact with the system through these manipulations; for the others, it seemed difficult to shift their state of mind, so the generated images showed similar results. Overall, the installation seemed to be an enjoyable experience for those who interacted with it.
Another observation made throughout the project was the appearance of technical problems such as noise in the EEG signal. The few times the EEG had a weak signal were mostly due to poor skin contact, usually caused by hair between the electrodes and the scalp. To minimize noise and ensure clean EEG data, several steps were taken. Proper placement of the Muse S headband was prioritized, with specific attention to electrode contact with the scalp to avoid interference from hair or poor skin contact. For good skin contact, the headband must be correctly positioned and sit tight on each person’s head, adjusted if needed. The EEG was also cleaned and inspected so that it felt safe to wear and everyone was comfortable during their participation in the project.
Regarding the ethical implications of the project from the data point of view, this subject was treated seriously. The data used during the project were not stored in any form after each participant’s experience ended. As described, the data were collected only while the headband was placed on an individual and were streamed into the program solely to make the project function. Data privacy was ensured by processing EEG signals in real-time without storing or transmitting personal information. Participants were briefed on the purpose and limitations of the system, with explicit consent obtained before involvement. Every participant was informed about this aspect, to ensure their privacy and to create a safe environment for the experience. From the intimate space chosen to the data used, the project ensured privacy, stability, and a trustworthy atmosphere for each participant. The interactive nature of the installation still raises important questions about unintended profiling and the privacy of visual outputs generated by AI. Future work will explore guidelines for consent, data security, and responsible use in artistic and non-research contexts.

4. Discussion

Background and Historical Aspects

The EEG was first used on humans in 1924, when Hans Berger, expecting a metabolic release of “energy in the form of localized heat and electrical currents, leading to an increased understanding of normal and disturbed mental processes” [31], tried to observe the relationship between brainwaves and brain diseases in people with skull defects and first described alpha and beta waves [32]. Berger obtained inconsistent results, but his work was a major revelation at the time in the study of the mind. A century has now passed since those experiments, which led to the advancements known today, including using the EEG to control computers.
This type of interface, the BCI (brain–computer interface), first started to be researched in 1973 under the Advanced Research Projects Agency [33]. It began with a 1973 paper called “Toward Direct Brain-Computer Communication” by Jacques Vidal, where the concept was presented, but it took another two decades for it to be tested on humans [34]. In [35], it was stated that “the Brain Computer Interface project, [...] was meant to be a first attempt to evaluate the feasibility and practicality of utilizing the brain signals in a man-computer dialogue while at the same time developing a novel tool for the study of the neurophysiological phenomena that govern the production and the control of observable neuroelectric events”.
Fifty-one years later, at the time of this paper, people can make their own BCI, and it works surprisingly well. Nathan Copeland set the record for living with an implanted BCI, also being one of the first to join a study for people with spinal cord injuries, at the University of Pittsburgh in 2014 [10]. The technology evolved and became available to artists, as more and more started to experiment with it, including Marina Abramovic, Refik Anadol, Yehuda Duenyas, and Eduardo Miranda.
In 2023, in Washington, the American Association for the Advancement of Science (AAAS) and Blackrock Neurotech opened the first BCI exhibition, called “BCI Exhibit”, the first of its kind. Nathan Copeland, mentioned earlier, also exhibited drawings there made with his BCI [26]. This is proof of how far humanity has come and of the potential involved.
AI has a history similar to the EEG’s, though it was made not to read the brain but to expand its processing capacity. In 1945, Vannevar Bush proposed a system to amplify the mind, describing the experience as being able to “talk his thoughts to a machine” [36], which highlights the similarity between the BCI and AI. In the same journal, Bush published a drawing of a man with a device on his head, much like an EEG. At that time, interest centered on thoughts and decision-making processes, but the approaches were different. The use of AI in decision making became possible only more than 50 years after Alan Turing predicted that a computer could play chess [37]. Even though the first AI systems could solve only simple tasks, the first AI to generate art was AARON. Made by the artist Harold Cohen in 1968, it consisted of a series of points connected based on true or false gates, representing a simple way to generate digital drawings [38].
Since then, the hardware has become stronger and the software smarter, making it possible to generate images using more than points and giving the computer a mind of its own. But a mind must learn to be smart and capable of great things, so it was not until deep learning that AI could start to understand and generate complex images. That took place after 2009, when ImageNet provided a large image training dataset for AI [39]. Since then, interest in this field has increased, with Progressive GAN (2017) and StyleGAN from NVIDIA gaining a lot of attention.
NVIDIA is one of the big names in the graphics card market, and it invested great resources in improving its technology and developing artificial intelligence that generates graphic content. Lev Manovich calls the period after 2010 “the quantitative turn” in art, because of how art was processed and made in this period, describing how production and research became a matter of quantity [40]. This shows how much those artificial minds had accumulated by 2020, when his book was written. Shortly after, in January 2021, DALL-E, a powerful text-to-image generator, was released, and in 2022, Stable Diffusion and Midjourney followed.
At the time of writing, AI can generate real-time video, a technological achievement that was difficult to imagine even a few years ago. It is only reasonable to expect an even more advanced future.
Some machines can now potentially read our minds through the EEG, and artificial intelligence can think for us. It is interesting to see how machines have become more humane because we taught them to think, while humans have become more cyborg-like with all the technology on, and sometimes in, their bodies. Merging that artificial mind with our brain data was described in a scientific paper in 2022, which presented a pipeline that processes EEG data and feeds the resulting emotion into an image generation model, a GAN trained on paintings [41]. The pipeline they described is a simple yet powerful tool, and it can be reproduced with minimal computer knowledge using the software and hardware available on the market today, as can be observed in Figure 9.
The Ascent is a performance created in 2011/2012 by the Emmy award-winning director Yehuda Duenyas, who specializes in next-generation entertainment technology; it is a good example of how art can be created with the EEG, being the first of its kind. In this experience, the user was invited to wear an EEG headset and a harness that would lift them if a good mental state was registered. It acted like a simple slider mechanism: the lower the brainwave frequency, the faster it would lift the person, while rumbling bass and blue light were activated [42]. It literally lifted a person off the ground if that person had a quiet state of mind, giving a literal meaning, through technology, to the weightless state that meditation can provide. This is a good example of how technology is an extension of the body and of ourselves, since the mind [43] is constantly being analyzed, with the environment and harness responding accordingly. Mirjana Prpa and Philippe Pasquier stated in their book about BCIs that this performance has “Reactive Input Agency—employs brain activity that is altered by an external stimulus. The participant is simply attending to the stimulus” [4]. The user must break the pattern of actively observing and let go of their attention to the surroundings if they want to reach that state of mind and experience the liftoff. Reaching the top reflects the ability to calm down and focus; a person who did not manage to lift off may have been experiencing anxiety or distraction that interfered with their calmness. Each person should be aware that taking part in this installation exposes these states of mind, revealing things about themselves. This interactive experience demonstrates how simple EEG measures can control many devices, such as light, sound, and even a custom-made system that lifts a person, and it also gives meaning to the elevated state of mind achieved with meditation. Powerful technology was used to achieve an equally powerful experience, and it was done in 2012, when wearable EEGs had just started to appear on the market and AI was still in its infancy within digitalization [44,45,46,47,48].
Refik Anadol, a California-based Turkish artist, is internationally known for his abstract data visualizations, and his work has received important awards [49]. His work is largely driven by the need to explore the possibilities of artificial intelligence in many ways, and it is a relevant example for this paper since the “Melting Memories” installation combines both the EEG and AI. Anadol calls such works “Data Paintings”, since they are data-driven, and he even quotes the American philosopher John Dewey: “Science states meanings; art expresses them” [49]. These “data paintings” form an interesting symbiosis between the artistic and scientific worlds, and the understanding of their intertwined nature is beautifully highlighted in [50]. Anadol even asks rhetorically whether “Data can become a pigment”, implying that data are the building block of art.
“Melting Memories” is an installation meant to visualize the data extracted from a 32-channel Enobio EEG in an artistic way, and artificial intelligence is used here to process the data. The idea was sparked after the artist’s uncle, who had Alzheimer’s, was losing the ability to process memories. “Melting Memories” was meant to help “visualize the moment of remembering” [51].
The project was made in 2018, a year in which AI image generators were just experimental GANs, so the use of AI was limited to signal processing. The EEG data were processed with AI and the visual output was generated with the VVVV (procedural visual live programming environment for generative art) procedural system. That resulted in an output that can be defined as generative art rather than AI-generated art because he essentially controlled a generative system with the data gathered and processed [51]. It is important to mention the fact that he worked with the scientists at the Neuroscape Laboratory at the University of California to process the signal properly.
Refik Anadol published screenshots of his creative process on his website, from data gathering and processing to programming, sharing the creative process behind “Melting Memories”. In one image, the EEG that was used can be seen, as well as a spectral map and a screenshot from VVVV showing the incoming data feeding into the procedural noise texture used to displace a plane.
Neuroknitting is “an interactive artwork that brings together biometric data, craft, digital fabrication, interaction, and data physicalization and sensification” [52]. In this interactive artwork, the artists recorded brain activity with a wearable EEG headset, a 14-channel Emotiv Epoc, and transformed it into knitting patterns, producing textiles that carry a timestamp of the brain activity; they brought data into being as patterns that can be worn. Here, the artists recorded the brainwaves of a monk who listened to an interpretation of music composed by Beethoven, and the data collected while the subject listened were used to drive knitting machines, which moved faster or slower based on the recorded brainwave activity. In addition, the monk controlled a pre-made AI, StyleGAN2, which generated visuals. StyleGAN2 is a precursor of the AI image generators used today, and this interactive artwork is an example of how brains can be used to control actual AI-generated images [53,54]. Even though the images are not generated in real-time, i.e., the image sequence has already been created, the user can still control the frame rate displayed on the screen. Overlaying the AI-generated image, there is a circle that visualizes the sound through the amplitude of its displacement.

5. Conclusions

This paper’s main intention was to showcase the results of research on brainwave visualization using generative AI systems. The aim was to demonstrate how brain activity can be measured and how those data can be processed and interpreted to manipulate parameters in node-based generative AI systems. The paper used step-by-step descriptions of the process, framed by the literature, for clarity, since the subject can become very technical at points; it therefore approached the theme by gradually explaining the implications and details of these emerging technologies. It discussed the purpose of EEG art, starting with a brief introduction to the technological advancements of the topic and the benefits in the medical field, highlighting the importance of these advancements. It is relevant to consider how this progress has opened new areas of research and expression. Another important point was the contribution and evolution of the BCI (brain–computer interface), one of the major starting points. A correlation between technologies was needed, highlighting the tie between the BCI and AI (artificial intelligence), and the paper briefly compared and explained how these two expanded and evolved similarly. The EEG, BCI, and AI all have short histories, as they are still evolving to this day.
Another fundamental point was the discussion of AI itself. The arrival of a recent real-time AI image generator, StreamDiffusion, marked the beginning of a new set of possibilities for people interested in this field. After briefly covering these technological advancements, the paper explained in detail how the EEG signals were collected. A proper method of gathering the data with the EEG was fundamental: the data needed to be carefully monitored, with great attention paid to the people who interacted with the headband. Correct placement of the headband, careful collection of the necessary data, and supervision of the process ensured correct results and credibility for the investigation. A thorough explanation of the process was also important, as transparency is needed for future work, and the methods were described in as much detail as possible to avoid a superficial approach to the matter.
While this study was conducted in a controlled environment, its findings provide a foundation for scaling to more dynamic applications. Potential use cases include public art installations, where adaptive filtering could address challenges of signal noise in busy settings. Similarly, clinical trials could benefit from real-time visualization systems for therapy and diagnostics. Scalability concerns, such as maintaining system responsiveness to diverse user states, highlight the need for further research into AI-driven signal processing.
A real-life example was presented to demonstrate the entire process, ensuring transparency in how these tools and concepts are used. The creative process and the possibilities of the technology were also demonstrated, allowing for an open discussion on the impact and importance that this area of research can have.

Author Contributions

Conceptualization: A.V.P., L.-I.C. and G.D.L.; methodology: A.V.P., L.-I.C., G.D.L. and A.G.; validation: L.-I.C. and A.G.; investigation: A.V.P., L.-I.C. and G.D.L.; resources: L.-I.C. and G.D.L.; writing—original draft preparation: A.V.P. and G.D.L.; writing—review and editing: L.-I.C. and A.G.; supervision: L.-I.C. and A.G.; project administration: L.-I.C.; funding acquisition: L.-I.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant from the Ministry of Research, Innovation and Digitization, CCCDI-UEFISCDI: 86PHE/22/10/2024 from PN-IV-P8-8.1-PRE-HE-ORG-2024-0220/CIRCULAR WATER (RevHydro).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jobst, B.C. EEG Manual for Residents and Fellows; Dartmouth-Hitchcock Medical Center: Hanover, NH, USA, 2005; Available online: https://www.crossroadsacademy.org/crossroads/wp-content/uploads/2016/05/EEG-Manual.pdf (accessed on 13 June 2024).
  2. Kodaira, A.; Xu, C.; Hazama, T.; Yoshimoto, T.; Ohno, K.; Mitsuhori, S.; Sugano, S.; Cho, H.; Liu, Z.; Keutzer, K. Streamdiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation. arXiv 2023, arXiv:2312.12491. [Google Scholar]
  3. Krigolson, O.E.; Williams, C.C.; Norton, A.; Hassall, C.D.; Colino, F.L. Choosing MUSE: Validation of a Low-Cost, Portable EEG System for ERP Research. Front. Neurosci. 2017, 11, 109. [Google Scholar] [CrossRef] [PubMed]
  4. Prpa, M.; Pasquier, P. Brain-Computer Interfaces in Contemporary Art: A State of the Art and Taxonomy. In Brain Art: Brain-Computer Interfaces for Artistic Expression; Nijholt, A., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 65–115. [Google Scholar] [CrossRef]
  5. Few, S.; Edge, P. Data Visualization: Past, Present, and Future; IBM Cognos Innovation Center: Washington, DC, USA, 2007; pp. 1–12. [Google Scholar]
  6. Hammond, D.C. What is Neurofeedback? J. Neurother. 2007, 10, 25–36. [Google Scholar] [CrossRef]
  7. Nichols, T. The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters; Oxford University Press: Oxford, UK, 2017. [Google Scholar]
  8. Hrinchenko, H.; Trishch, R.; Mykolaiko, V.; Kovtun, O. Qualimetric Approaches to Assessing Sustainable Development Indicators. E3S Web Conf. 2023, 408, 01013. [Google Scholar] [CrossRef]
  9. Hrinchenko, H.; Didenko, N.; Burbyga, V.; Lesina, T.; Medvedovska, Y. Ensuring Sustainable Education through the Management of Higher Education Quality Indicators. E3S Web Conf. 2024, 558, 01029. [Google Scholar] [CrossRef]
  10. AAAS. Creating Art with Thought Alone: First BCI-Generated Art Exhibit Opens in Washington, D.C. AAAS Art of Science and Technology Program, 5 April 2023. Available online: https://www.aaas.org/news/creating-art-thought-alone-first-bci-generated-art-exhibit-opens-washington-dc (accessed on 13 June 2024).
  11. Descartes, R. Meditations on First Philosophy; Broadview Press: Peterborough, ON, Canada, 2013. [Google Scholar]
  12. Millán, J.d.R.; Galán, F.; Lew, E.; Chavarriaga, R. Non-Invasive Brain-Machine Interaction. Intern. J. Pattern Recognit. Artif. Intell. 2008, 22, 657–681. [Google Scholar] [CrossRef]
  13. Gao, Z.; Cui, X.; Wan, W.; Qin, Z.; Gu, Z. Signal Quality Investigation of a New Wearable Frontal Lobe EEG Device. Sensors 2022, 22, 1898. [Google Scholar] [CrossRef] [PubMed]
  14. Katsigiannis, S.; Ramzan, N. DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals from Wireless Low-Cost Off-the-Shelf Devices. IEEE J. Biomed. Health Inform. 2018, 22, 98–107. [Google Scholar] [CrossRef] [PubMed]
  15. Tecnologico de Monterrey. Muse 2 Headband Specifications. 2022. Available online: https://ifelldh.tec.mx/sites/g/files/vgjovo1101/files/Muse_2_Specifications.pdf (accessed on 13 June 2024).
  16. Joe, C. Muse 2 EEG Device in TouchDesigner. YouTube. 2023. Available online: https://www.youtube.com/watch?v=Br0JXvuzWEI (accessed on 29 January 2025).
  17. Nowakowski, T. See What Your Brain Does When You Look at Art. Smithsonian Magazine, 15 November 2023. Available online: https://www.smithsonianmag.com/smart-news/this-headset-shows-you-what-your-brainwaves-do-when-you-look-at-art-180983261/ (accessed on 13 June 2024).
  18. Liao, L.D.; Lin, C.T.; McDowell, K.; Wickenden, A.E.; Gramann, K.; Jung, T.P.; Ko, L.W.; Chang, J.Y. Biosensor Technologies for Augmented Brain–Computer Interfaces in the Next Decades. Proc. IEEE 2012, 100, 1553–1567. [Google Scholar] [CrossRef]
  19. Suhaimi, N.S.; Mountstephens, J.; Teo, J. EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities. Comput. Intell. Neurosci. 2020, 2020, 8875426. [Google Scholar] [CrossRef] [PubMed]
  20. Yang, H.-R.; Park, J.-E.; Choi, S.; Sohn, J.-H.; Lee, J.-M. EEG Asymmetry and Anxiety. In Proceedings of the 2013 International Winter Workshop on Brain-Computer Interface (BCI), Gangwon Province, Republic of Korea, 18–20 February 2013. [Google Scholar] [CrossRef]
  21. Muse. Muse S Starter Guide. Available online: https://www.choosemuse.com (accessed on 13 June 2024).
  22. Shelansky, I. Denoising Sensors in TouchDesigner. Ian Shelanskey’s Blog. Available online: https://ianshelanskey.com/2019/11/14/denoising-sensors-in-touchdesigner/ (accessed on 13 June 2024).
  23. Schmeder, A.; Freed, A. Implementation and Applications of Open Sound Control Timestamps; ICMC: Geneva, Switzerland, 2008. [Google Scholar]
  24. Bitbrain Team. How Deep Learning is Changing Machine Learning AI in EEG Data Processing. Bitbrain.com, 23 April 2020. Available online: https://www.bitbrain.com/blog/ai-eeg-data-processing (accessed on 13 June 2024).
  25. Gavriluk, V. How to Use Stable Diffusion? AI-Generated Images. Arounda. 2023. Available online: https://arounda.agency/blog/how-to-use-stable-diffusion-ai-generated-images (accessed on 13 June 2024).
  26. Sorkhabi, E. DIY Stable Diffusion API ↔ TouchDesigner. Available online: https://derivative.ca/community-post/tutorial/diy-stable-diffusion-api-↔-touchdesigner/67525 (accessed on 13 June 2024).
  27. Tschepe, B. Audioreactive Graffiti—TouchDesigner X Streamdiffusion Tutorial 1 for Intermediate. Derivative.ca. Available online: https://derivative.ca/community-post/tutorial/audioreactive-graffiti-–-touchdesigner-x-streamdiffusion-tutorial-1/68987 (accessed on 13 June 2024).
  28. Epstein, Z.; Hertzmann, A.; Investigators of Human Creativity; Akten, M.; Farid, H.; Fjeld, J.; Frank, M.R.; Groh, M.; Herman, L.; Leach, N.; et al. Art and the Science of Generative AI: A Deeper Dive. 2023. Available online: https://arxiv.org/pdf/2306.04141 (accessed on 13 June 2024).
  29. Derivative. CHOP, TouchDesigner99 Documentation. Available online: https://docs.derivative.ca/CHOP (accessed on 4 June 2024).
  30. Woaswi, W.; Hanif, M.; Mohamed, S.; Hamzah, N.; Rizman, Z. Human Emotion Detection via Brain Waves Study by Using Electroencephalogram (EEG). Int. J. Adv. Sci. Eng. Inf. Technol. 2016, 6, 1005. [Google Scholar] [CrossRef]
  31. Stone, J.; Hughes, J. Early History of Electroencephalography and Establishment of the American Clinical Neurophysiology Society. J. Clin. Neurophysiol. 2013, 30, 28–44. [Google Scholar] [CrossRef] [PubMed]
  32. Bulut, S. The Brain-Computer Interface. In Proceedings of the International Conference on Technics, Technologies and Education ICTTE 2019, Yambol, Bulgaria, 16–18 October 2019; pp. 133–138. [Google Scholar] [CrossRef]
  33. Kawala-Sterniuk, A.; Browarska, N.; Al-Bakri, A. Summary of over Fifty Years with Brain-Computer Interfaces—A Review. Brain Sci. 2021, 11, 43. [Google Scholar] [CrossRef] [PubMed]
  34. Vidal, J.J. Toward Direct Brain-Computer Communication. Annu. Rev. Biophys. Bioeng. 1973, 2, 157–180. [Google Scholar] [CrossRef] [PubMed]
  35. Mullin, E. This Man Set the Record for Wearing a Brain-Computer Interface. Wired, 17 August 2022. Available online: https://www.wired.com/story/this-man-set-the-record-for-wearing-a-brain-computer-interface/ (accessed on 13 June 2024).
  36. Bush, V. As We May Think. 1945. Available online: http://archive.org/details/as-we-may-think (accessed on 13 June 2024).
  37. Copeland, B.J. Alan Turing and the Beginning of AI. Encyclopaedia Britannica, 29 January 2025. Available online: https://www.britannica.com/technology/artificial-intelligence/Alan-Turing-and-the-beginning-of-AI (accessed on 13 June 2024).
  38. McCorduck, P. Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen; W.H. Freeman: New York, NY, USA, 1991; Available online: https://books.google.ro/books?id=r3UyBgAAQBAJ (accessed on 13 June 2024).
  39. Greene, T. 2010–2019: The Rise of Deep Learning. The Next Web, 2 January 2020. Available online: https://thenextweb.com/news/2010-2019-the-rise-of-deep-learning (accessed on 13 June 2024).
  40. Manovich, L. Cultural Analytics; MIT Press: Cambridge, MA, USA, 2020; Available online: https://books.google.ro/books?id=pIv-DwAAQBAJ (accessed on 13 June 2024).
  41. Riccio, P.; Bergaust, K.; Christensen-Scheel, B.; Martin, J.-C.; Zuluaga, M.; Nichele, S. AI-Based Artistic Representation of Emotions from EEG Signals: A Discussion on Fairness, Inclusion, and Aesthetics. arXiv 2022, arXiv:2202.03246. [Google Scholar] [CrossRef]
  42. Kaminer, A. Brain Waves Lift Me Higher. The New York Times, 22 June 2012. Available online: https://www.nytimes.com/2012/06/24/fashion/the-ascent-levitating-in-brooklyn.html (accessed on 13 June 2024).
  43. McLuhan, M.; Gordon, W.T. Understanding Media: The Extensions of Man; Gingko Press: Berkeley, CA, USA, 2003; Available online: https://books.google.ro/books?id=m7poAAAAIAAJ (accessed on 13 June 2024).
  44. Tairov, I.; Stefanova, N.; Aleksandrova, A.; Aleksandrov, M. Review of AI-Driven Solutions in Business Value and Operational Efficiency. Econ. Ecol. Socium 2024, 8, 55–66. [Google Scholar] [CrossRef]
  45. Makurin, A. Technological Aspects and Environmental Consequences of Mining Encryption. Econ. Ecol. Socium 2023, 7, 61–70. [Google Scholar] [CrossRef]
  46. Kaminsky, O.; Koval, V.; Yereshko, J.; Vdovenko, N.; Bocharov, M.; Kazancoglu, Y. Evaluating the Effectiveness of Enterprises’ Digital Transformation by Fuzzy Logic. In Advances in Soft Computing Applications; River: Aalborg, Denmark, 2023; pp. 75–90. [Google Scholar]
  47. Demianchuk, M.; Koval, V.; Hordopolov, V.; Kozlovtseva, V.; Atstaja, D. Ensuring Sustainable Development of Enterprises in the Conditions of Digital Transformations. E3S Web Conf. 2021, 280, 02002. [Google Scholar] [CrossRef]
  48. Koval, V.; Kremenetskaya, Y.; Markov, S. Promising Green Telecommunications Based on Hybrid Network Architecture. In Proceedings of the 2019 International Conference on Information and Telecommunication Technologies and Radio Electronics (UkrMiCo), Odessa, Ukraine, 9–13 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar] [CrossRef]
  49. Anadol, R. About Refik Anadol. Available online: https://refikanadol.com/refik-anadol/ (accessed on 4 June 2024).
  50. Anadol, R. Art in the Age of Machine Intelligence. TED. 2020. Available online: http://www.marilenabeltramini.it/learning-together-2122/UserFiles/Admin_teacher/art_in_the_age_of_machine_intelligence.pdf (accessed on 4 June 2024).
  51. Anadol, R. Melting Memories. Available online: https://refikanadol.com/works/melting-memories/ (accessed on 4 June 2024).
  52. Guljajeva, V.; Canet Sola, M. Interactive NeuroKnitting: Knitting with Your Brain. In Proceedings of the 16th International Symposium on Visual Information Communication and Interaction (VINCI ’23), 22–24 September 2023; New York, NY, USA; p. 49. [Google Scholar] [CrossRef]
  53. Xu, S.; Wang, Z. Diffusion: Emotional Visualization Based on Biofeedback Control by EEG. Artnodes 2021, 28, 1–11. [Google Scholar] [CrossRef]
  54. Zhou, T.; Chen, X.; Shen, Y.; Nieuwoudt, M.; Pun, C.M.; Wang, S. Generative AI Enables EEG Data Augmentation for Alzheimer’s Disease Detection via Diffusion Model. In Proceedings of the 2023 IEEE International Symposium on Product Compliance Engineering-Asia (ISPCE-ASIA), Shanghai, China, 3–5 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
Figure 1. Muse headset used in this study; photograph taken at the exhibition opening.
Figure 2. Touchdesigner99 interface with the signal from the Muse headband.
Figure 3. Touchdesigner99 interface with EEG data on the left, a circle used as a base image in the middle, and Stream Diffusion Tox on the right with the resulting AI-generated image.
Figure 4. Touchdesigner99 interface with EEG data and StreamDiffusion Tox.
Figure 5. Touchdesigner99 interface with StreamDiffusion Tox parameters.
Figure 6. Rendering of the setup that was used to display the system.
Figure 7. Touchdesigner99 interface with the AI-generated flowers and the data.
Figure 8. Touchdesigner99 interface with the EEG data.
Figure 9. The pipeline for EEG image generation described in 2022 [41].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
