Emotions function to optimize adaptive responses to biologically significant events. In the auditory channel, humans are highly attuned to emotional signals in speech and music that arise from shifts in the frequency spectrum, intensity, and rate of acoustic information.
This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing.
Qualitative measurements of subjective reactions appear to be inappropriate indicators for assessing the effect of noise on cognitive performance.
Most studies so far have focused on making EEG-based BCI spellers faster and more reliable; however, few have considered their security. This study shows, for the first time, that P300 and steady-state visual evoked potential BCI spellers are highly vulnerable: they can be severely compromised by adversarial perturbations that are too tiny to be noticed when added to EEG signals, yet can mislead the spellers into spelling anything the attacker wants.
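To make the attack mechanism concrete, the following is a minimal, hypothetical sketch (not the paper's actual method or model): a toy linear classifier stands in for the speller's EEG decoder, and an FGSM-style perturbation, scaled to be small relative to the signal itself, is just large enough to flip the predicted letter. All names, dimensions, and the classifier are illustrative assumptions.

```python
import numpy as np

# Hypothetical linear "speller" decoder (assumed, for illustration only):
# w @ x > 0 -> letter "A", otherwise letter "B".
rng = np.random.default_rng(0)
n = 256                        # samples in one simulated EEG epoch (assumed)
w = rng.normal(size=n)         # decoder weights (assumed)
x = rng.normal(size=n)         # one clean simulated EEG epoch
if w @ x <= 0:
    x = -x                     # ensure the clean epoch decodes as "A"

def predict(signal):
    """Decode one epoch with the toy linear classifier."""
    return "A" if w @ signal > 0 else "B"

# FGSM-style perturbation: step against the decision score along sign(w),
# with amplitude eps chosen just large enough to cross the boundary.
eps = (w @ x) / np.abs(w).sum() * 1.01
delta = -eps * np.sign(w)      # per-sample magnitude eps, far below signal level

x_adv = x + delta
print(predict(x))              # "A" on the clean epoch
print(predict(x_adv))          # flips to "B" despite the tiny perturbation
print(eps, np.std(x))          # eps is a small fraction of the signal's own scale
```

The design point this illustrates is the one the abstract makes: because the perturbation's per-sample amplitude (`eps`) is a small fraction of the EEG signal's own standard deviation, it is effectively invisible in the raw trace, yet it deterministically changes the decoded output.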
Picture a scene at a crowded cocktail party, a club with live music, a sporting event, or just a loud room with a lot going on. If someone speaks, it can be very difficult to decipher what is being said, given all the ambient noise in the immediate area. While the initial reaction might be “Huh? What did you say?”, rest assured the brain is on the job.