Open Hardware Multichannel Sound Interface for Hearing Aid Research on BeagleBone Black with openMHA: Cape4all

Tobias Herzke[1,4], Hendrik Kayser[1,2,4], Christopher Seifert[3,4], Paul Maanen[1,4], Christopher Obbard[5], Guillermo Payá-Vayá[3,4], Holger Blume[3,4] and Volker Hohmann[1,2,4]

[1] HörTech gGmbH, Marie-Curie-Str. 2, D-26129 Oldenburg, Germany
[2] Medical Physics, Carl von Ossietzky Universität Oldenburg, D-26111 Oldenburg, Germany
[3] Institute of Microelectronic Systems, Leibniz Universität, D-30176 Hannover, Germany
[4] Cluster of Excellence “Hearing4all”
[5] 64 Studio Ltd, Isle of Wight
[email protected]

Abstract

This paper describes Cape4all, a new multichannel sound interface for the BeagleBone Black. The sound interface has 6 input channels with optional microphone pre-amplifiers and between 4 and 6 output channels. The multichannel sound extension cape for the BeagleBone Black was designed and produced, and an ALSA driver was written for it. It is used with the openMHA hearing aid research software to perform hearing aid signal processing on the BeagleBone Black with a customized Debian distribution tailored to real-time audio signal processing.

Keywords

Hearing aids, audio signal processing, sound hardware

1 Introduction

Hearing aids are the most common form of mitigation for mild and moderate hearing losses. Hearing aids help the wearer to follow conversations and acoustic events in different situations. In the complex acoustic environments that we encounter in our daily life, information about the acoustic scene is inferred at higher stages of the human auditory system and exploited in the brain for, e.g., speech understanding. A hearing loss causes, in addition to reduced sensitivity to soft sounds, a partial loss of this information. Effective signal processing algorithms are required for compensation.
For this reason, improving signal processing in hearing aids is an active research topic. Part of the work in hearing aid research is to develop novel signal processing algorithms that can be used in hearing aids to improve the hearing experience for hard-of-hearing people. Usually, simulations are run and evaluated in terms of objective measures after such an algorithm has been developed mathematically. Results from simulations do not necessarily reflect the benefit of the algorithm a) when integrated into the complete signal processing chain of a hearing aid and b) in a real-world scenario. To assess the usefulness of new hearing aid algorithms for hearing-impaired people, potential new hearing aid signal processing algorithms also have to be tested with hearing-impaired test subjects in realistic situations. Running an algorithm under test on an end-user hearing device is practically infeasible, as it requires access to a proprietary system of a hearing aid manufacturer, and a large effort for the down-to-hardware implementation on such devices. Instead, a software platform can be used to simulate the hearing aid processing chain. The open Master Hearing Aid (openMHA, [HörTech gGmbH and Universität Oldenburg, 2017], [Herzke et al., 2017]) is such a platform. openMHA can be utilized to conduct field tests of hearing aid processing methods running on portable hardware.

The following sections first introduce the software and hardware platforms utilizable to evaluate hearing aid algorithms with hearing-impaired test subjects. We work out the need for a custom multichannel sound interface for a small, portable computer. The subsequent sections report on the hardware design process that resulted in the Cape4all[1] BeagleBone sound interface, the sound driver development, and finally the possible usage of the sound interface for hearing aid research.

[1] Developed in the Cluster of Excellence “Hearing4all”.

2 Software and Hardware Platform for Hearing Aid Research

HörTech and the University of Oldenburg have developed the openMHA [HörTech gGmbH and Universität Oldenburg, 2017], [Herzke et al., 2017] software platform for the development and evaluation of hearing aid algorithms, where individual hearing aid algorithms can be implemented as plugins and loaded at run-time. The platform provides a set of standard algorithms to form a complete hearing aid. It can process audio signals in real-time with a low delay (below 10 ms) between sound input and sound output. (The actual delay depends on the sound hardware used for input and output, on configuration options like sampling rate and audio buffer size, and also on delay introduced by some signal processing algorithms.)

In its current version 4.5.5, the openMHA software platform can execute on computers with Linux and Mac OS operating systems, e.g., in a laboratory environment. Toolboxes for generating virtual sound environments in a laboratory exist (e.g. TASCAR [Grimm et al., 2015]), but the sound environment in a lab, and even more so the subject behavior in a lab environment, will always differ from real environments encountered by hearing aid users in real life. To test real-life situations, we have to go outside and into real situations with hearing-impaired users wearing a mobile computer that executes the openMHA and provides the first chance to test new algorithms in real-world situations.
In the past, we have used laptops for this purpose, but with the advent of small, ARM-based single-board computers like the Raspberry Pi, BeagleBone, and several others, these have become an option for executing openMHA that imposes less weight on the test subjects. The processing power of these devices is significantly lower than that of PCs and laptops, which will always limit the extent and setup of algorithms that can be executed on such a mobile platform (compared to a PC).

openMHA is meant as a common platform to be used by different hearing aid research labs to combine their work. By providing a solid base platform, we want to encourage researchers to implement and publish their algorithms as openMHA plugins so that work can be shared and results can be reproduced by independent labs. For this purpose, openMHA includes a toolbox library that already contains functions and classes useful to more than one algorithm, to speed up the implementation of new algorithms. As a key to the usability of the software in different usage scenarios, openMHA also includes several manuals for different entry levels, ranging from plugin development over application engineering based on available plugins and functionality to the application of the software in the context of audiological research and hearing aid fitting controlled through a graphical user interface (GUI). Step-by-step tutorials on the implementation of openMHA plugins as well as example configurations are provided to enable an autonomous familiarization for new users.

Some hearing aid algorithms, such as directional microphones, need to process the sound from more than one microphone per ear, which is why a multichannel sound card is generally needed to capture the sound from all hearing aid microphones. Professional sound cards can be used for this purpose in stationary laboratory setups.
Bus-powered USB sound cards can be used with laptops in mobile evaluation setups, but the choice of bus-powered interfaces with more than 2 input channels is limited. We have observed that the total delay between input and output sounds that can be achieved with USB sound cards is always larger than what can be achieved with similar sound cards with PCI or ExpressCard interfaces. This difference in delay is in the order of 2 ms, which will already affect some hearing aid algorithms. We have also observed that with USB sound cards the delay may vary from one start of the sound card to the next, in the range of 1 ms, which is detrimental to some processing algorithms such as acoustic feedback reduction. (Feedback reduction algorithms are an essential part of a hearing aid processing chain and need the system to be as invariant as possible to work effectively.)

The Inter-IC Sound (IIS or I2S) bus, which transports sound data from the SoC (System on a Chip: the combination of a microprocessor and several peripherals, e.g. graphics unit and sound interface, on a single chip) to the audio codecs with the AD/DA converters (and back), is accessible on expansion headers on many of the single-board ARM computers, making it possible to create custom sound interface hardware. Third parties already provide multichannel sound interfaces for popular boards like the BeagleBone Black and the Raspberry Pi. Of these two devices, the BeagleBone Black has the advantage of hardware support for multichannel

audio input/output; see Section 3.1 for details.

One multichannel sound interface option for the BeagleBone Black is the BELA cape [Moro et al., 2016]. It provides stereo in/out and 8 additional analogue data acquisition channels. These additional channels can also be used to capture audio, but they do not provide anti-aliasing filters, and the achievable sampling rates depend on the number of channels in simultaneous use. The BELA cape makes use of real-time hardware present on the BeagleBone Black. Audio processing algorithms can be compiled to execute on this real-time hardware, process the input channel data, and produce output channel data. Existing Linux audio processing applications using ALSA[3] or JACK[4] [Davis, 2003] and common features of the operating system cannot execute on this real-time hardware.

Another multichannel audio interface developed for BeagleBone platforms is the CTAG face2|4 [Langer and Manzke, 2015], [Langer, 2015]. Its hardware design is available open source from GitHub, and drivers have been included in official BeagleBoard SD card images. Providing capabilities for multichannel signal processing, this device is in principle suitable for hearing aid processing on the BeagleBone Black. A drawback that remains here is the necessity to add an external power supply for the microphones connected to the device.

The Octo Audio Injector sound card offers 6 input channels and 8 output channels for the Raspberry Pi. Although the Raspberry Pi offers no hardware support for more than two sound channels, this sound card manages to offer enough input channels to connect 2 hearing aids with 3 microphones each.
A disadvantage of this sound card for hearing aid research is that additional external microphone preamplifiers are needed to raise the microphone signals to line level, which adds to the hardware that test subjects would have to carry around. An example setup for teaching hearing aid signal processing [Schädler, 2017], [Schädler et al., 2018] uses the stereo version of this sound card together with external microphone preamplifiers.

[3] Acronym for Advanced Linux Sound Architecture, the name for a system of Linux kernel sound card drivers and a user space API to exchange sound data with these drivers.
[4] Self-referencing acronym for JACK Audio Connection Kit, a user-space server application and library to connect inputs and outputs of audio applications and sound cards.

Figure 1: Cape4all with two hearing aids (each containing three microphones) connected.

3 Development of the Cape4all Multichannel Sound Interface for Hearing Aid Research

For hearing aid research, we need a compact multichannel sound interface for a single-board ARM computer with integrated microphone pre-amplifiers. Since such a multichannel sound interface was not available, we decided to develop one ourselves.

3.1 Choice of ARM Board Basis for a Multichannel Sound Card

In the ongoing developments of the Cluster of Excellence “Hearing4all”, several audio interfaces were developed, proving the Inter-IC Sound (IIS or I2S) bus in combination with the Analog Devices ADAU1761 [Analog Devices Inc., 2009] stereo audio codec useful [Seifert et al., 2015]. To gain multichannel capabilities, a time division multiplex (TDM) scheme specified for I2S is used. The chosen ADAU1761 codecs support a TDM output scheme.
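To give a feel for the data rates such a TDM bus must carry, the required bit clock can be computed from the sampling rate, the number of slots, and the slot width. The sketch below is an illustration only; it assumes a common frame layout of 8 slots of 32 bits each (the 8-slot mode is the one described later in Section 4.1, while the 32-bit slot width is an assumption here).

```python
def tdm_bit_clock(fs_hz, slots=8, slot_width_bits=32):
    """Bit clock (Hz) needed to move `slots` time-multiplexed
    channels of `slot_width_bits` bits each at sampling rate `fs_hz`.
    One TDM frame (all slots) is transmitted per sampling period."""
    return fs_hz * slots * slot_width_bits

# One frame of 8 x 32-bit slots per sampling period:
print(tdm_bit_clock(48000))  # 12288000 Hz, i.e. 12.288 MHz
print(tdm_bit_clock(32000))  # 8192000 Hz, i.e. 8.192 MHz
```

Under these assumptions, a 48 kHz frame rate needs a 12.288 MHz bit clock, which divides the 24.576 MHz master clock mentioned in Section 4.1 evenly.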
To allow the usage in combination with an ARM-based platform, and therefore with openMHA, the BeagleBone Black was chosen, with native I2S TDM support provided by the integrated McASP (Multichannel Audio Serial Port) interfaces.

3.2 Hardware Design

The Cape4all hardware was designed by the Leibniz University Hannover based on [Seifert et al., 2015]. In addition to the I2S TDM output capabilities, the Analog Devices ADAU1761 audio codecs have integrated microphone amplifiers. Up to 3 microphones for each ear in a bilateral fitting are assumed in the context of hearing device development. Therefore, 3 stereo audio codecs are integrated on the Cape4all PCB (Printed Circuit Board), allowing up to 6 input and output channels simultaneously. Due to the TDM scheme, only five signal connections are required to transport and synchronize all 3 codecs with 6 input and output channels and the McASP interface of the BeagleBone Black.

The board provides standard stereo jacks for connecting off-the-shelf sound hardware as well as pin headers for custom designs. 3 stereo jacks are mounted on the board for the 6 input channels, and 2 additional stereo jacks for the first 4 output channels. The remaining output channels are only accessible through the pin headers. An on-board voltage regulator provides microphone bias voltage, which can be switched on and off as needed and routed to different connectors. The bias voltage can be altered by exchanging on-board resistors. For more details, see the reference manual provided with the hardware design files and the driver. Figure 1 shows the hardware in use.

3.3 Hardware Tests and Design Revisions

In the testing process of previously built audio interface boards using the ADAU1761 stereo audio codecs, it was revealed that the internal components of the codecs create bus collisions. The I2S TDM bus digital output pins of the codecs do not provide a high-resistance state; they drive the signal high or low, preventing another codec from putting data on the same signal. The documentation of the codecs did not give any details helping to avoid the bus collision. In order to avoid it, an OR-gate was added to the board design to merge the signals of the codecs into one signal. This solves the problem on the voltage level but does not prevent timing collisions due to wrong configuration of the codec outputs.
The correct codec configuration is ensured by the ALSA driver (see Section 4). In the normal TDM configuration, filling 6 of the available 8 timeslots, all 3 audio codecs work correctly. For further details on I2S TDM signaling see [Seifert et al., 2013].

3.4 Release as Open Hardware

The hardware design files for the Cape4all are released under a Creative Commons license.

4 Driver Development

The ALSA sound driver for the Cape4all sound interface was developed by 64 Studio. As the Linux kernel already has support for both the McASP audio serial port [Pandey et al., 2009] used on the BeagleBone Black and the ADAU1761 codec [Clausen, 2014] used on the Cape4all, the task for 64 Studio was to create a glue driver that tells the SoC in which order the codecs are arranged on the Cape4all. The driver registers the cape as effectively one PCM device with three mixer sub-devices (corresponding to the three physical ADAU1761 codecs), each with its own set of controls in the ALSA mixer. The driver also sets up the codecs' clock path, TDM slots and various other default settings.

As the driver exposes the Cape4all as a regular ALSA device with three mixer sub-devices, each with their own ALSA controls, application software can communicate with the device without any modifications.

4.1 Limitations

The McASP used on the BeagleBone Black is clocked from a 24.576 MHz crystal. This limits the available sample rates to whole divisors of this clock: for instance, 24 kHz or 48 kHz are acceptable, but 22.05 kHz or 44.1 kHz are not.

The ADAU1761 codecs do not directly support sharing 6 channels between 3 separate codecs on a TDM bus. As a workaround, the TDM mode for transferring 8 channels is used, where 2 channels contain no data.
A consequence is that the sound card appears to have 8 channels in ALSA, but only the first 6 channels, corresponding to the physical channels, should be used.

4.2 Release

The driver code is released as open source software under the GNU General Public License, Version 2 or later, in the same git repository as the hardware design files on GitHub.
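The sample-rate restriction described in Section 4.1 is easy to check numerically: a rate is usable only if it divides the 24.576 MHz master clock without remainder. A minimal sketch (the list of candidate rates is merely illustrative, and in practice the driver and codecs may impose further constraints beyond this divisor rule):

```python
MCLK_HZ = 24_576_000  # McASP crystal frequency on the BeagleBone Black

def rate_supported(rate_hz):
    """A sample rate is available only if it divides the master clock evenly."""
    return MCLK_HZ % rate_hz == 0

for rate in (22050, 24000, 32000, 44100, 48000, 96000):
    print(rate, rate_supported(rate))
```

This reproduces the examples from Section 4.1: 24 kHz and 48 kHz divide the clock evenly (1024 and 512 times, respectively), while 22.05 kHz and 44.1 kHz do not.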

5 Usage

As the Linux distributions created by SoC development board manufacturers are typically not suited to audio signal processing and contain many applications that are not useful in this context, a custom Debian distribution has been prepared by 64 Studio. For example, the JACK audio server contained in this custom distribution was built without D-Bus support to allow the system to run without a GUI, and the final Debian system was tweaked by 64 Studio for basic real-time performance. An image file containing this distribution is available for download together with the hardware design. It contains just the software needed to run openMHA, has the device tree and a custom kernel built in, as well as custom tweaks for increased real-time audio performance.

These steps are needed to prepare a BeagleBone Black for multichannel signal processing with openMHA and Cape4all:

- Download the image and copy it to an SD card
- Download and compile openMHA on the system
- Set up the system for higher audio performance according to the manual provided
- Start the JACK audio server with settings according to the openMHA configuration to be run
- Read the example configuration provided with openMHA and start processing

The openMHA processes can be accessed at runtime through a TCP/IP connection. This connection can be used to read out and change parameters of the running system. By this means it is possible to run a GUI on a laptop or tablet computer that can be used to control the processing parameters remotely.
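As a rough illustration of such remote control, the sketch below sends one text command per line over a socket and waits for a status line. The port number 33337, the `cmd=start` command and the `(MHA:success)`/`(MHA:failure)` status strings reflect our reading of openMHA's text-based configuration interface and should be treated as assumptions here; consult the openMHA application manual for the authoritative protocol.

```python
import socket

SUCCESS = "(MHA:success)"
FAILURE = "(MHA:failure)"

def parse_response(text):
    """Split a raw response into (payload_lines, ok_flag).
    The last non-empty line is assumed to be the status marker."""
    lines = [ln for ln in text.splitlines() if ln]
    if not lines:
        return [], False
    return lines[:-1], lines[-1] == SUCCESS

class MhaClient:
    """Minimal line-oriented client for a running openMHA process (sketch)."""

    def __init__(self, host="localhost", port=33337):  # port is an assumption
        self.sock = socket.create_connection((host, port))
        self.buf = b""

    def command(self, cmd):
        """Send one command and block until a status marker arrives."""
        self.sock.sendall((cmd + "\n").encode())
        while SUCCESS.encode() not in self.buf and FAILURE.encode() not in self.buf:
            self.buf += self.sock.recv(4096)
        text, self.buf = self.buf.decode(), b""
        return parse_response(text)

# Example usage (requires a running openMHA instance on the BeagleBone):
#   client = MhaClient()
#   client.command("cmd=start")
```

A GUI on a laptop or tablet would wrap exactly this kind of exchange, reading variables with query commands and writing new values as assignments.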
For details, refer to the openMHA application manual.

6 Conclusions

Cape4all is a working multichannel sound interface for the BeagleBone Black with integrated microphone pre-amplifiers, which makes it suitable for hearing aid research, where pre-amplifiers are essential and a small form factor matters.

A working ALSA driver has been developed that takes care of the proper initialization of the codecs and of the multichannel capabilities of the BeagleBone Black, and then drives the multichannel sound exchange between user space applications and the codecs on the sound interface. Both the hardware design files and the driver have been published with open licenses on GitHub.

In its current state, the Cape4all can be run together with a JACK audio server on a BeagleBone Black reliably with a 4 ms buffer (128 samples per channel) at a 32 kHz sampling rate. This is the state directly after driver development, before any optimization towards shorter audio buffers has been performed. This current state is an important step towards our goal of a mobile hearing aid algorithm evaluation setup, but it needs to be improved to achieve the target overall audio delay below 10 ms between input and output sounds, considering that some of the algorithms will add a small algorithmic delay. Therefore, we are going to further optimize the driver in collaboration with 64 Studio after the initial release to enable smaller audio buffer sizes.

7 Acknowledgements

This work was supported by the German Research Foundation (DFG) Cluster of Excellence EXC 1077/1 “Hearing4all”. Research reported in this publication was supported by the National Institute On Deafness And Other Communication Disorders of the National Institutes of Health under Award Number R01DC015429. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

Analog Devices Inc. 2009.
ADAU1761 – SigmaDSP stereo, low power, 96 kHz, 24-bit audio codec with integrated PLL. Data sheet ADAU1761.pdf.

Lars-Peter Clausen. 2014. adau17x1.c. Linux kernel driver source file.

Paul Davis. 2003. JACK Audio Connection Kit.

Giso Grimm, Joanna Luberadzka, Tobias Herzke, and Volker Hohmann. 2015. Toolbox for acoustic scene creation and rendering (TASCAR) – render methods and research applications. In Proceedings of the Linux Audio Conference, pages 1–7, Mainz. Johannes Gutenberg-Universität.

Tobias Herzke, Hendrik Kayser, Frasher Loshaj, Giso Grimm, and Volker Hohmann. 2017. Open signal processing software platform for hearing aid research (openMHA). In Proceedings of the Linux Audio Conference, pages 35–42, Saint-Étienne. Université Jean Monnet.

HörTech gGmbH and Universität Oldenburg. 2017. openMHA web site on GitHub.

Henrik Langer and Robert Manzke. 2015. Linux-based low-latency multichannel audio system (CTAG face2|4).

Henrik Langer. 2015. … niedriger Latenz.

Giulio Moro, Astrid Bin, Robert H. Jack, Christian Heinrichs, Andrew P. McPherson, et al. 2016. Making high-performance embedded instruments with Bela and Pure Data.

Nirmal Pandey, Suresh Rajashekara, and Steve …. 2009. davinci-mcasp.c. Linux kernel driver source file.

Marc René Schädler, Hendrik Kayser, and Tobias Herzke. 2018. Pi hearing aid. The MagPi (Raspberry Pi Magazine), 67:34–35.

Marc René Schädler. 2017. openMHA on Raspberry Pi.

Christopher Seifert, Guillermo Payá-Vayá, and Holger Blume. 2013. A multi-channel audio extension board for binaural hearing aid systems. In Proceedings of ICT.OPEN, pages 33–37.

Christopher Seifert, Guillermo Payá-Vayá, Holger Blume, Tobias Herzke, and Volker Hohmann. 2015. A mobile SoC-based platform for evaluating hearing aid algorithms and architectures. In Consumer Electronics Berlin (ICCE-Berlin), 2015 IEEE 5th International Conference on, pages 93–97. IEEE.
