Bionic Deepa





Saturday, 15 November 2003
I couldn't comprehend any spoken words in 'Master and Commander: The Far Side of the World'. I admit I was expecting to catch at least a few words. Anyway, my lipreading skills seem to have improved with the added hearing. When I didn't fully understand what Phil was saying, he noticed I wasn't wearing the processor. With the processor on, he repeated the sentence "Why are you sitting in the dark?" and interestingly, I understood the last word with the 'k' at the end, which would otherwise have been invisible by lipreading alone.

Friday, 14 November 2003
I like the sound of writing with a pencil, so nice that I kept on scribbling :-)

Thursday, 13 November 2003
I asked Phil to repeat what he was saying to me. He'll repeat it if I close my eyes (so that I concentrate on hearing instead of lipreading). I repeated "britney", imitating the sound without recognizing the word. And I was correct! Britney Spears.

Earlier in the morning at the Farmers Market, I was surprised to hear how audible the spoken words were around me. Of course, not yet comprehensible.

Friday, 31 October 2003
It is awesome hearing myself breathing. So rhythmic. I guess it means the environment is quiet. Good! Thanks to Colleen my audiologist, my processors were modified yesterday with new mappings. I am currently on the quietest setting at 60 IDR in paired stimulation. The microphone is set to 30% while the t-mic is set to 70%. It is still hard to believe that my noisy fridge is no longer bothering me. In fact, the environmental noises sound distant. Wonderful!

Thursday, 30 October 2003
I love my new behind-the-ear processor named Auria... especially with the t-mic attached. My voice sounded clearer; after all, the t-mic, situated right in front of the ear, picks up sounds from the front rather than from the back.

Auria also feels light and free... no more long cable running from the head to the body-worn processor. No more complications while visiting rest-rooms :-)

When we finished the programming session earlier in the afternoon with Colleen, my audiologist at UCSF, I had 3 different processing programs on my new processor. I asked for both sequential (one electrode position activated at a time) and paired (two electrodes fired at a time) pulses because I have not yet decided which sounds better. As for my BWP (body-worn processor), I again asked for both paired and sequential pulses for plugging into the CD-player, so that I can compare which sounds better while track-reading books.

Program 1 was set to 80 IDR (input dynamic range) with the sequential stimulation strategy. Program 2 was also set to 80 IDR but with the paired stimulation strategy. Program 3 is thoughtfully set to 60 IDR, paired, to tame noisy environments... so tolerable that my noisy fridge is no longer bothering me! 60 IDR is actually the default setting, accurately processing the decibel range from 40-100 dB. With 80 IDR, it is 20-100 dB. Anything above or below that range is either compressed or clipped.
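For the curious: the IDR arithmetic above can be pictured as a window hanging down from a fixed ceiling. This is only a toy sketch in Python, not the actual HiRes algorithm (which compresses gradually at the edges rather than hard-clipping); the function name and the 100 dB ceiling are my illustrative assumptions based on the numbers in this entry.

```python
def idr_window(level_db, idr=60, ceiling=100.0):
    """Toy model of input dynamic range (NOT the real HiRes algorithm).

    The processor accurately handles sounds inside a window whose top is
    fixed (assumed 100 dB here) and whose width is the IDR setting:
    60 IDR -> 40-100 dB, 80 IDR -> 20-100 dB. This sketch simply clips
    anything outside the window to its nearest edge.
    """
    floor = ceiling - idr
    return max(floor, min(level_db, ceiling))

# A quiet 30 dB fridge hum falls below the 60 IDR floor (40 dB)...
print(idr_window(30, idr=60))   # held at the floor
# ...but fits inside the wider 80 IDR window (20-100 dB).
print(idr_window(30, idr=80))   # passes through unchanged
# Very loud sounds are capped at the ceiling either way.
print(idr_window(110, idr=60))
```

This matches the diary's experience: on the 60 IDR program, quiet environmental noise like the fridge sits at or below the window's floor and stops intruding.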

In the meantime, I am thoroughly enjoying the t-mic sound quality because the environmental noises seem to have disappeared. Or in other words, they are not getting in the way whenever Phil or I speak. It was so nice hearing my voice clearly (program 2, paired) while driving back across the bridge, and that on a noisy freeway. It is as if I am speaking directly into the mic. Or as if Phil is speaking directly into the mic.

The microphone is responsible for receiving the sound that is interpreted by the signal processor. The signal processor divides the frequency spectrum into various sub-bands that are then transmitted to the electrode array in varying regions of the cochlea. Mirroring the basilar membrane's frequency organization, higher frequency signals are sent to microelectrodes neighboring nerve cells in the base region while the lower frequencies are sent to the apex region.

The cochlea is a fluid-filled, snail-shaped cavity located in the inner ear; the basilar membrane running along it helps separate the frequencies of sound. Higher frequencies are processed at the base of the basilar membrane, while the lower ones are processed at the apex. Changing frequencies produce pressure differentials in the cavity, which bend the hair cells organized along the membrane. The bending of these hair cells causes local neurons to fire, sending the signal to the brain for interpretation. In my case, most of my hair cells have been damaged from birth, disrupting the signal from the basilar membrane to the neurons. A cochlear implant compensates by somewhat restoring this signal to the neurons, thus allowing more sounds to be heard.

A cochlear implant functions through four basic subsystems: microphone, signal processor, transmission system and electrode array. My new HiRes Auria processor has two microphones on it: the t-mic, a snap-on audio earhook option nestled close to the front of the ear, and the built-in microphone situated on the top.
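To illustrate the frequency-splitting idea described above, here is a toy Python sketch of how a processor might carve the spectrum into bands, one per electrode, with the lowest band routed toward the apex and the highest toward the base. The function name, electrode count, and the 250-8000 Hz range are my illustrative assumptions, not the actual HiRes design.

```python
def electrode_bands(n_electrodes=16, f_low=250.0, f_high=8000.0):
    """Toy sketch (NOT the actual HiRes algorithm): split f_low..f_high
    into logarithmically spaced frequency bands, one per electrode.

    Band 0 (lowest frequencies) would drive the apex-most electrode;
    the last band (highest frequencies) the base-most one, mirroring
    the basilar membrane's tonotopic layout.
    """
    ratio = (f_high / f_low) ** (1.0 / n_electrodes)
    edges = [f_low * ratio ** i for i in range(n_electrodes + 1)]
    return list(zip(edges[:-1], edges[1:]))

# With just 4 hypothetical electrodes, the bands look like this:
for i, (lo, hi) in enumerate(electrode_bands(4)):
    place = "apex" if i == 0 else ("base" if i == 3 else "middle")
    print(f"electrode {i} ({place}): {lo:.0f}-{hi:.0f} Hz")
```

The logarithmic spacing reflects how the ear itself perceives pitch: each band covers the same musical interval rather than the same number of hertz.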

Friday, 17 October 2003
"Deep breath, deep breath, deep breath..." It was nice hearing the doctor while I was being examined. Of course, I could also respond correctly with just a hearing-aid. It is just that the words now sound nicer and obviously clearer... with the implant.

Friday, 3 October 2003
I phoned Phil at 5:00 p.m. to ask what time he wants to be picked up. He said a word a number of times that sounded like "six" but I was not confident enough to grasp it as the correct word, and asked him to repeat it again and again. It sounded so soft. And anyway, I had not had phone-exercise for weeks, so I definitely need regular phone-exercises.

A few minutes later... I had forgotten to plug in the phone-adaptor to my processor! No wonder I didn't hear Phil on the phone. Just shows how long ago I last used the phone!

Thursday, 2 October 2003
Phil and I lay down in a quiet bedroom at my bedtime... for an hour of conversation in darkness. Whew, it was very hard to match unknown sounds to known words. Can you match common names to strangers' faces? Anyway, after an hour, I eventually learnt "tomorrow, you telephone me" after Phil patiently tried expressing those unknown sounds as known words in different ways, e.g. pointing to me, as in "you", etc.

Wednesday, 1 October 2003
Unconsciously registering the wallet in Phil's hand, when he asked "Have you got any money?" I promptly responded, without looking up to lip-read, that there is money in my wallet.

Friday, 12 September 2003
Last evening was eventful, with an extremely high degree of emotional excitement. Thanks to Pooja's Apple PowerBook and Sony camcorder, I had a face-to-face conversation with daddy and mummy... lipreading them and even understanding the word 'parlor' (the implant proved useful when I didn't get it by lipreading). It was so emotional seeing their stupefied faces at this amazing technology, of actually seeing me in spite of the vast geographical distance from San Francisco to New Delhi. We had not seen each other for one whole year. As some of you can imagine, emailing each other is not the same as face-to-face animated conversations. Remember, I have not even been able to use the telephone to converse with my dear parents in all the years since I first left India in 1991. I can now see that video-conferencing will become standard like email... yesterday Chris bought an iBook and iSight for his family, to minimize the geographical barrier with live video while he is away from them for two months.

I am currently reading The Salmon of Doubt: Hitchhiking the Galaxy One Last Time, read by four of Douglas Adams's friends (most of the contents were obtained by hacking into his Macintosh computer). I find it difficult to track-read, probably due to the strong British accents? I cannot even tell which person spoke. I also cannot tell the emotions that must surely be there in their voices for their friend, Douglas Adams.

Thursday, 11 September 2003
Hello, it's Thursday and the leaf-blowers are here. No, they aren't wearing any ear-protection! Not even a covering over their noses. Of course, my implant is switched off.

Thursday, 4 September 2003
The leaf-blowers again turned up today. I know I can switch off my implant for my right ear and my hearing-aid for my left ear. Anyway, my ears are beyond repair. But what about the other neighbors? I didn't know that blowers have been banned in Berkeley since December 1990! But Noise Free America regularly receives e-mails from individuals who seem to love noise.

I get emails from friends mentioning their own hearing loss... high-frequency loss. Noise-induced hearing loss commonly begins in the high frequencies around 4 kHz, regardless of the frequency content of the noise. Loud noises enter the opening of the cochlear spiral and damage the high-frequency hair cells first. The low frequencies stimulate the area deep within the spiral... sounds protective!

So you must protect whatever hair cells are left in your ears through consistently preventive methods. Electronic ear plugs may sound like the best answer for today's noisy environment. Most audiologists offer soundscopes, which are custom-fitted for a patient and resemble hearing aids. Companies like Walker's Game Ear and E.A.R. Inc. also offer mass-produced versions.

The Book of Noise was originally written in 1968 as a primer for concerned citizens, to explain the dangers of noise pollution in simple language and to suggest some solutions. In updating it thirty years later, my only regret is that it still seems necessary.
- R. Murray Schafer

Tuesday, 2 September 2003
I am not attending any more audio-therapy for the rest of the year 2003. You see, I received a letter from my medical insurance warning that I have a $1,000 maximum benefit per calendar year for 'Speech Therapy'. It is a shame that they don't see the therapy as part of the Cochlear Implant package deal. This therapy is for status-post surgery, and is intended to ensure that I maximize my potential with the cochlear implant (which the insurance authorized for me to receive). Well, I'll have to manage somehow to keep exercising my hearing and hope daily track-reading will be sufficient for the time being. I am surely going to miss my enthusiastic therapist, Joy, who always jumps with joy at my progress.

Last week, I read Nothing Is Impossible read by Christopher Reeve, the author. His recovery is unprecedented, but Reeve and his doctors agree it is largely the result of intensive physical therapy. Five years after his accident, Reeve moved his little finger. Nothing is impossible.

Thursday, 28 August 2003
I was struggling so hard with track-reading and eventually stopped in despair. I then looked around just in case... and it's the same leaf-blower :-(

Monday, 25 August 2003
David photographed our AV iChat from Phil's work... what a fascinatingly accessible technology, to see them working at their desks 20 miles away!

Thursday, 21 August 2003
I am so furious with the noisy environment. I was track-reading and lost track because the environmental noises were too loud to follow. So I gave up and stopped the tape when I knew I was totally lost. I closed my eyes in despair on hearing such awful noises... listening carefully and amazed at how loud the nearby car-wash is. So I switched off my implant with the intention of going out to read in the warm sun, to give myself a break... and noticed a leaf-blower!!!!!! I am so mad. I hate leaf-blowers and they should be banned! I do not want them. He is still blowing (of course my implant is now off). No wonder Phil gets woken up every morning in madness and despair. It is an unbearable, torturous, senseless, and unacceptable noise. Imagine having leaf-blowers in India!!

It is comfortable being deaf... quietly peaceful.

Monday, 15 August 2003
The battery went flat half-way through track-reading. It was funny to notice how I panicked to switch off the CD-player, being most worried about losing my place. No, I don't get any warning. It just goes flat on the spot! I know I could have timed myself to swap batteries, but rechargeable batteries are generally most efficient when run completely flat.

Monday, 11 August 2003
I had another AV iChat with my sister... just the audio bit, and I managed to hear a couple of words... No and Home. It was an interesting experience to hear the 'Nnn' and 'mmm' in words, which I had never heard before the implant, even with my digital hearing-aids.

Saturday, 9 August 2003
Learning to Listen update: I am still finding it difficult to differentiate between pan/can, pea/tea, etc. Nevertheless, I managed to improve to 60%. Then, I was stretched to 13 sentences with a level of 70% accuracy. I easily heard one key-word (microwave, oven, dishwashing machine, toaster, etc.) in the next game, 'What is that noise?', where the keywords were placed in front of me while I listened for what Joy filled in: "The _______ is on". But what was interesting to note is how badly I did the moment a second key-word was added, e.g. on or off...

"The _______ is on"
"The _______ is off"

Thursday, 7 August 2003
It was sooooooo good to hear Pooja's laughter over the internet earlier this morning. She said 'Cherry', 'Beddy beddy bed', 'Bye-bye'... I recognized a few of the spoken words, of course with the help of her typing words to guide me along. It also helps me to exercise my hearing! Thanks to iChat AV, a simply fantastic piece of software from Apple. No, I could not see Pooja online because she does not yet have a camera on her side for video-conferencing.

And what a wonderful and perfect tool for the Deaf community to communicate in their language! Finally the technology has successfully gone beyond the awful barrier of the telephone... whew! Yes, I could see clearly what Phil was signing to me when we had a chat yesterday afternoon from his work. He even walked around the studio holding the Apple PowerBook while Chris, his work-mate, held the camera... so I was able to say Hi to most of them at Complete Pandemonium. Cool! I even saw myself when the camera momentarily pointed at the monitor screen. I wish you could experience it! There must be a way of recording that...

Friday, 1 August 2003
By the way, we finally ordered SONY DSC-F717 digital camera for $771 (including 256 MB memory stick pro + shipping) from Dell with various coupons.

Track-reading... since I had forgotten to bookmark (and on top of that, had not track-read for a few days), I simply couldn't find the page to match with the accompanying unabridged audio-book :-( So poor Phil took over 30 minutes to find the page for me. I had better be more responsible... you see, in the beginning I used to diligently bookmark with yellow stickies written with notes, i.e. which tape, which side, track no., etc.

I am thoroughly enjoying reading Fahrenheit 451 by Ray Bradbury. So good that I was reading intently before I realized I had got ahead of the listening with my reading, falling out of sync a number of times... oops! Yes, I have seen the film but it is so good that I had to read the original story.

A couple of weeks ago, Pooja phoned up from Singapore to say Hi to Phil but he simply handed the phone over to me. I panicked and went blank, not knowing what to say to her. Eventually I asked her what she had for breakfast, in the hope that the answer would be easy to guess :-) Unfortunately not! In the bargain, she had to repeat herself numerous times. I tried to hear carefully while also trying not to laugh. Even Phil couldn't help laughing because he knows too well how embarrassing it can be repeating one simple word numerous times in an office and public atmosphere.

No, I didn't get it although I came close to the answer... berry, strawberry. She had cherries.

Learning to hear and trying to make sense of words such as phonology, phonemes, morphemes, syntax, segmental discrimination... People seem to presume that I can hear and converse in English now that I have a cochlear implant. How? Although my first language is English, you must remember that hearing (spoken English) is a totally different experience from the visual (written English).

"English has more than 1100 combinations of letters that are used to produce the 40 sounds of the spoken language. It becomes a problem when words share the same phoneme but spell it differently. This occurs with the "e" sound in me, tea, tree, key, country, piece, and reprise. In addition, many English words have the same letter combination but are not pronounced the same. This is the case with mint and pint, clove and love, as well as cough and bough.  By comparison, the 33 sounds used in Italian are spelled with only 25 letter combinations. Italian words are spelled just as they are pronounced. Consequentially, Italians rarely have to ask each other "how do you spell your name."  It is not surprising that English is a far more difficult language to learn.

If your language does not have some of the sounds of another language, it is usually difficult for you to hear the differences and to pronounce them correctly.  For this reason, the r and l sounds in English are difficult to distinguish for native Japanese speakers. Try making these two sounds and think about the shape of your mouth and of the placement of your tongue. They are quite similar for both sounds. Native English speakers rarely have difficulty in distinguishing the r and l sounds because they have been familiar with them from early childhood. They are experts at hearing the difference. However, English speakers have difficulty with unfamiliar sounds in other languages, such as the v and b sounds in some Spanish dialects." - Analysis of Language

While you're here, you may want to see a video-clip of the emergence of a new sign language among deaf children in Nicaragua.

Wednesday, 29 July 2003
We saw the film Finding Nemo once again, but with subtitles this time. Interestingly, while driving back home and critiquing the film... we found ourselves enthusiastically talking of the Pixar short film instead! I couldn't remember the name of the short with the snowman in it, shown earlier... and asked Phil. I wrongly lipread him as saying 'tic tac', so I was told to listen. I listened carefully and heard 'nic nac'.

Knick Knack.

Saturday, 26 July 2003
Learning to Listen update: Vowel discrimination (cane, coin, can, cone), Manner discrimination (wash/walk, road/rose) and Voice discrimination (coat/goat, bee/pea) have been mastered. They are all segmental discrimination tasks. I am currently finding Place discrimination difficult e.g. pea/tea, pickle/tickle, pan/can/tan.

Also learning to identify one word at the end of a sentence, i.e. "May I borrow... a soap, a comb, a tissue, a dental floss, a shampoo, a razor, a mirror?" Yes, all these words are shown in front of me while Joy speaks behind a piece of paper (to prevent me from lip-reading).

Saturday, 12 July 2003
I am currently looking to buy a digital camera from a seemingly wide range of products, so we had a discussion about their pros and cons. And more importantly, whether I'll be using wide-angle or close-up lenses more. Phil, of course, took advantage of that moment to cover his mouth and, to our pleasant surprise, I was able to catch a few words and even managed to have this camera discussion. Phil commented that since I am skilled in lip-reading, my brain seems to process similar data for listening through the art of guesswork. When there is a context, it is always easier to guess.

The words I heard during the discussion: camera, available, price, how much?, Pentax. Of course when I do get stuck, Phil would then sign a bit before reverting to spoken language. Back and forth. The aim is to exercise my hearing without lip-reading... as much as possible.

Saturday, 6 July 2003
Preparing for my bedtime, I heard Phil saying "Brush your teeth". Yes, it is within the limitation that I was already aware of what could be said at that moment. It is also one of the first sentences I have been listening to repeatedly since the January switch-on.

Joy's Learning to Listen report:
In March, Deepa was able to choose between 10 sentences with a level of 30% accuracy. Presently, she is able to discriminate between 10-13 sentences with a level of 60% accuracy. With multiple repeats, Deepa's performance often improves to 80%.

Previously Deepa was able to choose between 2 words containing a vowel difference in 80-90% of trials, so long as the vowels were highly contrasted. Deepa is now able to identify vowel differences, given a field of 5 words to choose from i.e. coin, cane, can, cone, etc. with a level of 80-90% accuracy.

Deepa has been challenged by more advanced tasks, such as identifying objects by function/description cues. Performance with these sorts of tasks is at 80%, given 4 items to choose from i.e. car, chair, bed, comb, etc. and 70% given 8 items to choose from. This goal requires ongoing remediation.

"By age one, typically, they use about three words consisting of single morphemes (such as  eat, mom, and more). By six years old, they use about 2,500 morphemes." - Learning Language

Thursday, 3 July 2003
Last night, I heard "What is your name?" when Phil asked out of the blue.

Tuesday, 2 July 2003
Phil has started covering his mouth while chatting with me... oh dear. It's so hard to be patient when I promptly want to know today's gossip.

When I shut my eyes, I am actually hearing my voice and therefore consciously speaking correctly. Phil also shut his eyes to share my experience. Discussing sounds, we realized how invisible they are and that once heard, they are no longer there. No evidence of sounds being heard or said. Scary.

I noted how easily I get tired just from hearing. No, lipreading is no easier. In fact, when asked which is more tiring, I don't know. But I now have no choice when Phil has started covering his mouth. Anyway, I heard Phil saying "Apple" correctly, and that without any clues. Eventually Phil became optimistically surprised by my hearing capability and predicts that I should be able to have phone-conversations with him in two years' time.

Jan-Jun 2003
The first six months

Dec 2002
My cochlear implant surgery