<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | Andrew William Curran</title><link>https://andrewwilliamcurran.com/project/</link><atom:link href="https://andrewwilliamcurran.com/project/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><lastBuildDate>Tue, 01 Sep 2020 00:00:00 +0000</lastBuildDate><image><url>https://andrewwilliamcurran.com/images/icon_hu254b98f2e046d586c534e37806aa987b_2037621_512x512_fill_lanczos_center_2.png</url><title>Projects</title><link>https://andrewwilliamcurran.com/project/</link></image><item><title>Age-dependent effects of unilateral hearing loss</title><link>https://andrewwilliamcurran.com/project/phd-thesis/</link><pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate><guid>https://andrewwilliamcurran.com/project/phd-thesis/</guid><description>&lt;p>Excerpt from the Abstract:&lt;/p>
&lt;p>Subjects with single-sided deafness (SSD) increasingly undergo cochlear implant surgery (SSD-CI users) to obtain binaural hearing benefits, such as directional hearing. However, asymmetric hearing may disrupt the hemispheric balance of the central auditory circuits that support spatial hearing. In an animal model of unilateral deafness, we tested whether the binaural integration of auditory midbrain neurons is differentially affected by unilateral hearing loss during development versus during adulthood.&lt;/p>
&lt;p>Single-unit responses were recorded in the inferior colliculus of Mongolian gerbils. First, basic response parameters to unilateral and bilateral acoustic stimuli (pure tones and clicks) were characterized in normal-hearing (NH) juveniles (postnatal days P15-23) to identify critical periods for binaural integration. Second, juvenile (P15, P19, P23) and adult (~P65) animals were unilaterally deafened. After 15 days of deafness, animals received bilateral round-window electrodes, and responses to monaural and binaural biphasic electric pulses were recorded. NH adult gerbils served as controls. Parameters of binaural integration included interaural frequency receptive field alignment, interaural response strength dominance, and interaural time difference (ITD) coding.&lt;/p>
&lt;p>In hearing animals, mature neuronal responsiveness to transient click stimuli was not observed until around P19. Although adult-like neural discrimination thresholds were observed at P21, the overall incidence of ITD-sensitive neurons did not reach mature levels until P23. Even at this age, however, interaural receptive field alignments were still immature.&lt;/p>
&lt;p>In SSD animals, age-dependent degradations in ITD coding were observed. SSD onset before the closure of the critical period for ITD coding resulted in a severe reduction in the incidence and sensitivity of ITD-coding neurons and an increase in ITD discrimination thresholds. These degradations occurred in both hemispheres, independent of their position relative to the deafened ear. In contrast, later SSD onset led to asymmetric degradations in ITD sensitivity and ITD discrimination thresholds that were particularly pronounced in the hemisphere contralateral to the deafened ear.&lt;/p>
&lt;p>Results show that neural ITD coding is vulnerable to asymmetric hearing loss both during development and during adulthood. Nevertheless, age at deafness onset differentially affects neural ITD processing. Whereas early-onset SSD resulted in symmetric deficits in ITD coding, SSD onset after the closure of the critical period for ITD coding was associated with hemisphere-specific degradations. These results suggest that differential intervention strategies are required to optimize directional hearing in juvenile and adult SSD-CI users.&lt;/p></description></item><item><title>Optimization of ABR and eABR methods</title><link>https://andrewwilliamcurran.com/project/masters-thesis/</link><pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate><guid>https://andrewwilliamcurran.com/project/masters-thesis/</guid><description>&lt;p>Excerpt from the Abstract:&lt;/p>
&lt;p>The electrically evoked Auditory Brainstem Response (eABR) and the acoustically evoked Auditory Brainstem Response (ABR) are two related techniques used clinically for the objective evaluation of cochlear implant (CI) patients and of patients with suspected hearing loss, respectively. The Sentiero device, from Path Medical GmbH, is capable of recording and processing ABR responses and was tested for its potential to obtain eABR as well. Methods of ABR analysis, such as wavelet analysis, Single Point Scattering (SPS), and muscle artifact rejection, were examined, as well as methods of Signal-to-Noise Ratio (SNR) estimation.
Eight normal-hearing subjects and one CI patient were measured using ABR and eABR, respectively. The data were imported into Matlab, then filtered, averaged, windowed, and subjected to a suite of
analysis techniques. Wavelet analysis failed to provide additional information about the ABR, although its results correlated strongly with those of accepted techniques. SPS successfully determined the statistical significance of specific waveforms in both the ABR and eABR cases. Muscle artifact rejection was deemed unnecessary, although potentially useful for ABR testing in toddlers and infants. SNR estimation was found to approximately follow a square-root trend at relatively high levels of signal power.
As a proof of concept, eABR measurements were successfully obtained from the CI patient. Furthermore, an electrode connection was developed that allows simultaneous, parallel measurement by two EEG recording devices; it was successfully used to obtain eABR as well, demonstrating a parallel-recording capability for improved comparison between differing EEG devices.&lt;/p>
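&lt;p>The square-root trend is consistent with coherent epoch averaging: averaging N sweeps leaves the time-locked response intact while residual noise shrinks, so amplitude SNR grows roughly with the square root of N. The following is a minimal sketch of that idea using a synthetic toy waveform and noise level (not the thesis pipeline or data):&lt;/p>

```python
# Coherent averaging sketch: amplitude SNR of an averaged response
# grows roughly as sqrt(N). The waveform, sampling rate, and noise
# level below are illustrative assumptions, not thesis parameters.
import numpy as np

rng = np.random.default_rng(0)

fs = 16000                        # sampling rate (Hz), illustrative
t = np.arange(0, 0.010, 1 / fs)   # 10 ms analysis window
# Toy damped sinusoid standing in for an evoked response
signal = 0.5 * np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.003)

def averaged_snr(n_epochs: int, noise_sd: float = 2.0) -> float:
    """Amplitude SNR of the mean of n_epochs noisy sweeps."""
    epochs = signal + rng.normal(0.0, noise_sd, size=(n_epochs, t.size))
    residual = epochs.mean(axis=0) - signal   # noise left after averaging
    return np.sqrt(np.mean(signal**2) / np.mean(residual**2))

for n in (1, 100, 10000):
    # Each 100x increase in epochs yields roughly a 10x SNR gain
    print(n, averaged_snr(n))
```

In practice the true signal is unknown, so SNR must be estimated (e.g. from the variance across sweeps), but the sqrt(N) scaling is the same.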
&lt;p>&lt;a href="https://andrewwilliamcurran.com/files/mscSlides.pptx">Final Colloquium Presentation Slides&lt;/a>&lt;/p>
&lt;p>The thesis document is available upon request.&lt;/p></description></item><item><title>Hear What I Hear</title><link>https://andrewwilliamcurran.com/project/hear-what-i-hear/</link><pubDate>Sun, 06 May 2018 00:00:00 +0000</pubDate><guid>https://andrewwilliamcurran.com/project/hear-what-i-hear/</guid><description>&lt;p>Recently, research groups have shown that it is possible to decode images from a person&amp;rsquo;s brain as they are viewed, using both fMRI and EEG combined with machine learning strategies. These strategies have not yet been applied to auditory stimuli, however. It should theoretically also be possible to decode imagined visual or auditory perceptions. This project aimed to use the same principles to extract simple musical details from EEG, and potentially even imagined music.&lt;/p>
&lt;p>I had several volunteers wear a 64-channel EEG helmet and listen to several stimuli, then separately imagine each stimulus in their head. With this dataset, we may eventually be able to identify common features that allow us to predict the imagined sound, using machine learning or neural networks.&lt;/p>
&lt;p>At its current stage, the project achieves only about 25% correct classification among 5 sounds. Not great, admittedly, but better than the 20% chance level. There is potential.&lt;/p>
&lt;p>Anyone interested in collaborating on this project should contact me.&lt;/p></description></item></channel></rss>