Professor Xu Yifeng, director of the Brain Health Research Institute at the National Medical Center for Mental Illness and an expert member of the special-case community, said that special cases differ from both rare-disease cases and difficult, complex cases: the key questions are whether a case is special enough, and whether studying it can open new horizons, expose blind spots in basic and clinical research, and produce new breakthroughs.

He said that the program's first pilot call for cases received more than 20 submissions from China and abroad, and that, after screening to ensure compliance with medical-ethics regulations, five were approved by the expert committee. One funded project involves an autistic person with an extraordinary photographic memory. The same batch also includes three follow-up projects, covering cases such as comorbid narcolepsy and schizophrenia and early-onset temporal lobe dementia.

"We look forward to further research to uncover the mystery of how the human brain achieves super memory, explore the possibility of cultivating more powerful brains, and explore new ways to treat brain diseases." Xu Yifeng told the first financial reporter.

Xu Yifeng said the community convenes an expert committee every three months to review cases reported by doctors, assessing whether they are sufficiently unusual, whether they have research value, and whether they are genuine and consistent with medical ethics.

Funding is divided into three tiers: follow-up grants of up to RMB 50,000, mainly to collect complete case data; research grants of RMB 100,000 to RMB 300,000; and in-depth research grants of more than RMB 300,000, which fund collaborative research by doctors and interdisciplinary scholars.

Previously, TCCI had set up the Advanced Laboratory of Applied Neurotechnology with Huashan Hospital and the Advanced Laboratory of Artificial Intelligence and Mental Health with Shanghai Mental Health Center.

In collaboration with the California Institute of Technology, it also established the TCCI Caltech Institute of Neuroscience.

Scientists have trained a new AI system on millions of sounds to let noise-cancelling headphones retain human voices

We live in a noisy world. Noise-cancelling headphones can reduce ambient sound, but they filter out everything indiscriminately, so you can easily miss what you actually want to hear.

Now, a new artificial intelligence system aims to solve this problem for noise-cancelling headphones.

The system, called "Target Speech Hearing," lets the wearer select a person to listen to and keeps that person's voice audible while all other sounds are cancelled.

Although the technology is still at the proof-of-concept stage, its developers say they are in talks with manufacturers to build it into popular noise-cancelling headphones, and they are also working to bring it to hearing aids.

Shyam Gollakota, a professor at the University of Washington who worked on the project, said: "Listening to certain groups of people is a fundamental element of how we communicate in the world and how we interact with each other. But in certain situations, even if you don't have any hearing problems, it can become very challenging to focus on specific people."

Isolating one voice from a noisy mixture is computationally demanding, and that becomes a problem when AI models need to run in real time on headphones with limited computing power and battery life. To work within these constraints, the neural network has to be small and energy-efficient.

So the team used an AI compression technique called "knowledge distillation."

They took a large AI model trained on millions of sounds (the "teacher") and had it train a much smaller model (the "student") to mimic its behavior and match its performance.
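To make the teacher-student idea concrete, here is a minimal knowledge-distillation sketch in PyTorch. The architectures, sizes, and loss weighting are illustrative assumptions for a model operating on spectrogram frames, not the team's actual training code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallStudent(nn.Module):
    """Compact network meant to fit a headset's compute and power budget."""
    def __init__(self, n_bins=257, n_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_bins),
        )
    def forward(self, x):
        return self.net(x)

# Stand-in for a large model pretrained on millions of sounds (assumed given).
teacher = nn.Sequential(nn.Linear(257, 1024), nn.ReLU(), nn.Linear(1024, 257))
student = SmallStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_loss(student_out, teacher_out, clean, alpha=0.5):
    """Blend of (a) imitating the teacher and (b) matching the clean target."""
    imitation = F.mse_loss(student_out, teacher_out)  # mimic the teacher
    fidelity = F.mse_loss(student_out, clean)         # match clean speech
    return alpha * imitation + (1 - alpha) * fidelity

def train_step(noisy, clean):
    with torch.no_grad():                # the teacher is frozen; only soft targets
        teacher_out = teacher(noisy)
    student_out = student(noisy)
    loss = distillation_loss(student_out, teacher_out, clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random stand-in spectrogram frames:
noisy, clean = torch.randn(8, 257), torch.randn(8, 257)
print(train_step(noisy, clean))
```

The student sees the same noisy input as the frozen teacher and is penalized both for deviating from the teacher's output and from the clean reference, which is what lets a much smaller network approach the large model's behavior.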

They then trained the student model on ambient noise picked up by the microphones of noise-cancelling headphones, teaching it to extract the vocal patterns of specific voices.

To activate the AI system, the wearer faces the target speaker and holds down a button on the headset for a few seconds.

During this "enrollment" process, the system captures an audio sample through the headset's microphones and uses the recording to extract the speaker's vocal signature, even if other voices and noise are present nearby.
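A hedged sketch of what such an enrollment step could look like: a few seconds of microphone audio are reduced to a fixed-length embedding, the "voice signature" that later conditions the separation stage. The embedding network, dimensions, and function names here are assumptions, not the published system:

```python
import numpy as np
import torch
import torch.nn as nn

SAMPLE_RATE = 16_000   # assumed capture rate
ENROLL_SECONDS = 3     # "hold the button for a few seconds"

class EmbeddingNet(nn.Module):
    """Maps a raw waveform to a fixed-size speaker embedding (d-vector style)."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=400, stride=160), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, wav):                   # wav: (batch, samples)
        feats = self.conv(wav.unsqueeze(1))   # -> (batch, 64, frames)
        pooled = feats.mean(dim=-1)           # average over time
        emb = self.proj(pooled)
        return emb / emb.norm(dim=-1, keepdim=True)  # unit-length signature

def enroll(mic_samples: np.ndarray, model: EmbeddingNet) -> torch.Tensor:
    """Turn the button-press recording into a reusable voice signature."""
    wav = torch.from_numpy(mic_samples).float().unsqueeze(0)
    with torch.no_grad():
        return model(wav).squeeze(0)

# Usage: record ~3 s from the headset microphones, then enroll.
model = EmbeddingNet()
recording = np.random.randn(SAMPLE_RATE * ENROLL_SECONDS).astype(np.float32)  # placeholder audio
signature = enroll(recording, model)
print(signature.shape)  # torch.Size([128])
```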

This vocal signature is fed into a second neural network that runs on a small embedded computer connected to the headphones via Universal Serial Bus (USB). That network runs continuously, separating the target voice from all other sounds and playing it back to the wearer.
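The runtime stage might then look like the loop below: each incoming frame of mixed audio is passed through a small separation network together with the stored signature, and only the target voice is sent on to the earpieces. Frame size, the masking scheme, and the I/O callables are simplified assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

FRAME = 256  # samples per hop (~16 ms at 16 kHz), an assumed latency budget

class Separator(nn.Module):
    """Predicts a per-sample gain for the target voice, given its signature."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FRAME + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, FRAME), nn.Sigmoid(),  # mask in [0, 1]
        )

    def forward(self, frame, signature):
        x = torch.cat([frame, signature], dim=-1)
        return self.net(x) * frame                # apply mask to the input

def run(mic_stream, speaker_out, separator, signature):
    """Pull mixed audio from the mics, keep only the target voice."""
    with torch.no_grad():
        for chunk in mic_stream:                  # blocking reads of FRAME samples
            mixed = torch.from_numpy(chunk).float().unsqueeze(0)
            target_only = separator(mixed, signature.unsqueeze(0))
            speaker_out(target_only.squeeze(0).numpy())

# Usage with dummy I/O standing in for the headset's microphones and drivers.
sep, sig = Separator(), torch.randn(128)
sig = sig / sig.norm()
stream = (np.random.randn(FRAME).astype(np.float32) for _ in range(4))
run(stream, lambda frame: None, sep, sig)
```

A time-domain mask is the simplest choice for a sketch like this; real low-latency systems typically operate on short spectral frames, but the structure, a tiny conditioned network applied frame by frame, is the same.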

Once the system locks onto a speaker, it keeps prioritizing that person's voice even if the wearer turns away. And the more training data the system gathers from that speaker's voice, the better it becomes at isolating it.

