Poster Guidelines

INSTRUCTIONS FOR E-POSTER PRESENTATION

Meeting attendees should not take photographs or video of any slides, posters, or other presentations of data. To make the photography policy clear, we ask all presenters to include in the bottom corner of each slide an icon indicating that photography is not allowed. Click the link below to download the icon:
No Photography Icon

e-Poster Upload

All poster presenters are required to have their e-Posters uploaded to the site by February 5. *Note: It takes up to 24 hours after uploading an e-poster for it to appear in the gallery. This includes last-minute uploads during the live meeting.

The e-poster site offers presenters the opportunity to share their work online using high-quality graphics and allows meeting attendees to browse a gallery of the posters being presented. The site has been developed with security in mind, protecting your work against downloading, copying, or printing. E-poster viewing will be accessible only to registered 2021 Midwinter Meeting attendees.

To begin your e-Poster upload, log in to the SUBMISSIONS DASHBOARD. After logging in, click the following button to begin the process:

Registered ARO attendees will be able to view the e-posters prior to the meeting and viewing will remain open until 30 days post-meeting.

Poster Presenter Instructions: 

  1. Upload your e-poster to the e-poster gallery no later than Friday, February 5, 2021, using the links and instructions below. (*REQUIRED)
    a. Accepted file types: PDF, PNG, JPG, JPEG, or GIF. Images are interactive and allow the user to zoom in and out, so digital images are preferred. PDFs are allowed but lose resolution (appear blurred) when enlarged and are not recommended. The maximum file size is 8 MB.
  2. Upload an audio recording of a 1–3 minute presentation summary of your poster. (*RECOMMENDED)
    a. The recording must be in a .mp3 or .AAV file format and cannot exceed 8 MB.
    b. To record your presentation, you can use an online audio recording site (for example, https://online-voice-recorder.com/) or a voice recording app on your smartphone, then upload the file.

If an audio file is uploaded, attendees viewing your e-poster will be able to listen to your audio presentation.

  3. Upload a transcript (a written version of your audio recording) of your poster. (*REQUIRED)
  4. Provide a web link to join your personal meeting room. ARO strongly recommends Google Meet for accessibility, since it provides automatic captioning. Other acceptable video conferencing methods that provide automatic captioning include paid Zoom business/institutional accounts (captioning must be enabled) or Zoom integrated with Otter.ai. Please test your link, your microphone, your camera, and captioning before saving and submitting. The link must be a clickable URL.
    a. PLEASE NOTE: Microsoft Teams will not be accepted because it does not allow captions to be transmitted across institutions. For instance, if you host using a Teams account from your university, only people from your university will be able to see the captions.
  5. Provide an email address where you can be reached in case of connectivity issues.
  6. For instructions on setting up a free personal meeting room in Zoom, please see the information below.
  7. For tips and tricks for preparing for your presentation, please click here.
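Before uploading, it can save time to confirm that a file meets the stated limits. The sketch below is a hypothetical helper (not an official ARO tool) that checks a file against the accepted extensions and the 8 MB cap listed in the instructions above:

```python
import os

# Limits as stated in the upload instructions above.
MAX_BYTES = 8 * 1024 * 1024  # 8 MB cap for both posters and audio
POSTER_EXTS = {".pdf", ".png", ".jpg", ".jpeg", ".gif"}
AUDIO_EXTS = {".mp3", ".aav"}  # extensions exactly as listed in the instructions

def check_upload(path, kind="poster"):
    """Return a list of problems with the file; an empty list means it looks OK."""
    allowed = POSTER_EXTS if kind == "poster" else AUDIO_EXTS
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in allowed:
        problems.append(f"{ext or '(no extension)'} is not an accepted {kind} format")
    if os.path.exists(path) and os.path.getsize(path) > MAX_BYTES:
        problems.append("file exceeds the 8 MB limit")
    return problems
```

For example, `check_upload("poster.pdf", "poster")` returns an empty list when the file exists, has an accepted extension, and is under 8 MB.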

Creating a Google Meet

  • Use a web browser that supports Google Meet: Google Chrome, Mozilla Firefox, Microsoft Edge, or Apple Safari. Make sure that you have a current version.
  • Go to https://meet.google.com/
  • Click on “Start a meeting.”

  • Sign in to your institutional or personal Google account. If you do not have an account, click “Create account”. To create an account, fill in your first and last name, your email address (or select to create a Gmail address), and then type your (new) password twice.

  • After you sign in, click on “Join or start a meeting”.
  • Leave the nickname blank, then click Continue. Alternatively, you can schedule the meeting using Google Calendar or Outlook; click on “Learn how to schedule a meeting” to do so.

  • Click “Join now”.
  • Copy the meeting information by clicking on “Copy joining info”.

  • Paste this link into the appropriate field in the poster presenter access portal.
  • When it is time for you to present your poster, use your meeting code to join the meeting.
  • When you join, if your camera is working, you will see a video of yourself.
  • Along the bottom of the screen is a menu with options to view Meeting Details (another place where you can copy and share joining information) at the bottom left, and to raise your hand, turn on captions, or present at the bottom right.
  • Next, test your microphone and audio. At the far bottom right of the screen, click the three dots to see other options, then click Settings. You will see a screen that lets you view your microphone output and test the speaker/headphone audio. To test the microphone, speak and watch the dots increase in size. To test the speakers/headphones, click the test button; you should hear a ringing sound.

  • Next, test your captions. Click “Turn on captions” and speak. Captions should appear at the bottom of the screen.
  • To present, click “Present now”.
  • You can choose whether to share your whole screen or just the window with your poster in it.

Troubleshooting: 

  • I’m able to see myself on Google Meet, but can’t see others.
    • Make sure that you are not behind an institutional firewall. If you are at an institution with strict firewalls, such as a hospital, disconnect from the institution’s network and switch to a public or guest network. If you are at home, disconnect from your VPN, which also acts as a firewall.
    • A similar fix applies to others joining your meeting if they cannot see or hear participants.

If you have questions regarding your presentation, please contact ARO Headquarters at headquarters@aro.org.

Hearing loss can significantly disrupt the ability of children to become mainstreamed in educational environments that emphasize spoken language as a primary means of communication. Similarly, adults who lose their hearing after communicating using spoken language face numerous challenges understanding speech and integrating into social situations. These challenges are particularly significant in noisy situations, where multiple sound sources often arrive at the ears from various directions. Intervention with hearing aids and/or cochlear implants (CIs) has proven highly successful for restoring some aspects of communication, including speech understanding and language acquisition. However, there typically remains a notable gap in outcomes relative to normal-hearing listeners. Importantly, auditory abilities operate in the context of how hearing integrates with other senses. Notably, the visual system is tightly coupled to the auditory system: vision is known to influence auditory perception, and the neural mechanisms of vision and audition are closely linked. Thus, to understand how we hear and how CIs affect auditory perception, we must consider the integrative effects across these senses.

We start with Rebecca Alexander, a compelling public speaker who has been living with Usher syndrome, a genetic disorder found in tens of thousands of people that causes both deafness and blindness. Ms. Alexander will be introduced by Dr. Jeffrey Holt, who studies gene therapy strategies for hearing restoration. The symposium then highlights the work of scientists working across these areas. Here we integrate psychophysics, clinical research, and biological approaches, aiming to gain a coherent understanding of how we might ultimately improve outcomes in patients. Drs. Susana Martinez-Conde and Stephen Macknik are new to the ARO community and will discuss the neurobiology of the visual system as it relates to visual prostheses. Dr. Jennifer Groh will then discuss multisensory processing and how vision helps us hear. Having set the stage for thinking about the role of vision in a multisensory auditory world, we will hear from experts in the area of cochlear implants. Dr. René H. Gifford will discuss recent work on electric-acoustic integration in children and adults, and Dr. Sharon Cushing will discuss her work as a clinician on 3-D auditory and vestibular effects. Dr. Matthew Winn will talk about cognitive load and listening effort using pupillometry, and we will end with Dr. Rob Shepherd’s discussion of current work and future possibilities involving biological treatments and neural prostheses. Together, these presentations are designed to provide a broad and interdisciplinary view of the impact of sensory restoration in hearing, vision, and balance, and the potential for future approaches for improving the lives of patients.

Kirupa Suthakar, PhD - Dr. Kirupa Suthakar is a postdoctoral fellow at NIH/NIDCD, having formerly trained as a postdoctoral fellow at Massachusetts Eye and Ear/Harvard Medical School and as a doctoral student at the Garvan Institute of Medical Research/UNSW Australia. Kirupa's interest in the mind, and her particular fascination with how we are able to perceive the world around us, led her to pursue a research career in auditory neuroscience. To date, Kirupa's research has broadly focused on neurons within the auditory efferent circuit, which allow the brain to modulate incoming sound signals at the ear. Kirupa is an active member of the spARO community, serving as the Chair-Elect for 2021.

I began studying the vestibular system during my dissertation research at the Università di Pavia with Professors Ivo Prigioni and GianCarlo Russo. I had two postdoctoral fellowships, first at the University of Rochester with Professor Christopher Holt and then at the University of Illinois at Chicago with Professors Jonathan Art and Jay Goldberg.

My research focuses on characterizing the biophysics of synaptic transmission between hair cells and primary afferents in the vestibular system. For many years, an outstanding question in vestibular physiology was how the transduction current in the type I hair cell could be sufficient, in the face of large conductances open at rest, to depolarize it to the potentials necessary for conventional synaptic transmission with its unique afferent calyx.

In collaboration with Dr. Art, I overcame the technical challenges of simultaneously recording from type I hair cells and their enveloping calyx afferents to investigate this question. I was able to show that with depolarization of either the hair cell or the afferent, potassium ions accumulating in the cleft depolarize the synaptic partner. These studies led us to conclude that, owing to the extended apposition between the type I hair cell and its afferent, there are three modes of communication across the synapse. The slowest mode reflects the dynamic changes in potassium concentration in the cleft, which follow the integral of the ongoing hair cell transduction current. The intermediate mode is an indirect result of this potassium elevation, which serves as the mechanism by which the hair cell is depolarized to levels necessary for calcium influx and the vesicle fusion typical of glutamatergic quanta; the same increase in potassium concentration also depolarizes the afferent to potentials at which quantal EPSPs can trigger action potentials. The third and most rapid mode, like the slow mode, is bidirectional: current flowing out of either the hair cell or the afferent into the synaptic cleft divides between a fraction flowing out into the bath and a fraction flowing across the cleft into the synaptic partner.

The technical achievement of the dual electrode approach has enabled us to identify new facets of vestibular end organ synaptic physiology that in turn raise new questions and challenges for our field. I look forward with great excitement to the next chapter in my scientific story.

Charles C. Della Santina, PhD MD is a Professor of Otolaryngology – Head & Neck Surgery and Biomedical Engineering at the Johns Hopkins University School of Medicine, where he directs the Johns Hopkins Cochlear Implant Center and the Johns Hopkins Vestibular NeuroEngineering Laboratory.

As a practicing neurotologic surgeon, Dr. Della Santina specializes in treatment of middle ear, inner ear and auditory/vestibular nerve disorders. His clinical interests include restoration of hearing via cochlear implantation and management of patients who suffer from vestibular disorders, with a particular focus on helping individuals disabled by chronic postural instability and unsteady vision after bilateral loss of vestibular sensation. His laboratory’s research centers on basic and applied research supporting development of vestibular implants, which are medical devices intended to partially restore inner ear sensation of head movement. In addition to that work, his >90 publications include studies characterizing inner ear physiology and anatomy; describing novel clinical tests of vestibular function; and clarifying the effects of cochlear implantation, vestibular implantation, superior canal dehiscence syndrome and intratympanic gentamicin therapy on the inner ear and central nervous system.  Dr. Della Santina is also the founder and CEO/Chief Scientific Officer of Labyrinth Devices LLC, a company dedicated to bringing novel vestibular testing and implant technology into routine clinical care.

Andrew Griffith received his MD and PhD in Molecular Biophysics and Biochemistry from Yale University in 1992. He completed his general surgery internship and a residency in Otolaryngology-Head and Neck Surgery at the University of Michigan in 1998. He also completed a postdoctoral research fellowship in the Department of Human Genetics as part of his training at the University of Michigan. In 1998, he joined the Division of Intramural Research (DIR) in the National Institute on Deafness and Other Communication Disorders (NIDCD). He served as a senior investigator, the chief of the Molecular Biology and Genetics Section, the chief of the Otolaryngology Branch, and the director of the DIR, as well as the deputy director for Intramural Clinical Research across the NIH Intramural Research Program. His research program identifies and characterizes molecular and cellular mechanisms of normal and disordered hearing and balance in humans and mouse models. Two primary interests of his program have been hearing loss associated with enlargement of the vestibular aqueduct, and the function of TMC genes and proteins. The latter work led to the discovery that the deafness gene product TMC1 is a component of the hair cell sensory transduction channel. Since July of 2020, he has served as the Senior Associate Dean of Research and a Professor of Otolaryngology and Physiology in the College of Medicine at the University of Tennessee Health Science Center.

Gwenaëlle S. G. Géléoc obtained a PhD in Sensory Neurobiology from the University of Sciences in Montpellier (France) in 1996. She performed part of her PhD training at the University of Sussex, UK, where she characterized sensory transduction in vestibular hair cells and performed a comparative study of vestibular and cochlear hair cells. Gwenaëlle continued her training as an electrophysiologist at University College London, studying outer hair cell motility, and at Harvard Medical School, studying modulation of mechanotransduction in vestibular hair cells. As an independent investigator at the University of Virginia, she expanded this work and characterized the developmental acquisition of sensory transduction in mouse vestibular hair cells, the developmental acquisition of voltage-sensitive conductances in vestibular hair cells, and the tonotopic gradient in the acquisition of sensory transduction in the mouse cochlea. This work, along with quantitative spatio-temporal studies performed on several hair cell mechanotransduction candidates, led her to TMC1 and TMC2 and to long-term collaborations with Andrew Griffith and Jeff Holt. Dr. Géléoc is currently Assistant Professor of Otolaryngology at Boston Children’s Hospital, where she continues to study molecular players involved in the development and function of hair cells of the inner ear and develops new therapies for the treatment of deafness and balance disorders, with a particular focus on Usher syndrome.

Jeff Holt earned a doctorate from the Department of Physiology at the University of Rochester in 1995 for his studies of inward rectifier potassium channels in saccular hair cells. He went on to a post-doctoral position in the Neurobiology Department at Harvard Medical School and the Howard Hughes Medical Institute, where he characterized sensory transduction and adaptation in hair cells and developed a viral vector system to transfect cultured hair cells. Dr. Holt’s first faculty position was in the Neuroscience Department at the University of Virginia. In 2011 the lab moved to Boston Children’s Hospital / Harvard Medical School. Dr. Holt is currently a Professor in the Departments of Otolaryngology and Neurology in the F.M. Kirby Neurobiology Center. Dr. Holt and his team have been studying sensory transduction in auditory and vestibular hair cells for the past 20 years, with a particular focus on TMC1 and TMC2 over the past 12 years. This work led to the discovery that TMC1 forms the hair cell transduction channel. His work also focuses on developing gene therapy strategies for genetic hearing loss.