Sound Quality Research Unit (www.soundquality.dk)
The sound quality inside car compartments has become of great importance for the branding and segment positioning of car models and car manufacturers. However, systematic tools for integrating sound quality issues into the car design process are still lacking. Furthermore, research on sound quality attributes is inconclusive, especially since perceptual, affective, and connotative attributes have not been distinguished and analysed with regard to their interrelations. In addition, visual features of the car itself may greatly influence judgments of sound quality. The aim of the project is to investigate which methods and attributes are most relevant for the evaluation of interior car sound quality. The influence of visual input on perceived sound quality is being investigated and quantified. New tools and processes for the evaluation of sound quality are expected to be developed in close collaboration with the project partners.
The identification of relevant auditory attributes is pivotal in sound quality evaluation. The goal of this study is to uncover relevant spatial attributes in the context of multi-channel reproduced sound. Short musical excerpts were presented in mono, stereo, and several multi-channel formats to elicit various spatial sensations. Before the actual experiments, a panel of listeners was established using a selection procedure. Based on tests of their hearing thresholds, their spatial hearing, and their verbal production abilities, 40 listeners were chosen from 78 applicants. This panel was used throughout the course of this study. The first experiment aimed at an assessment of the overall preference between the reproduction modes, and an exploratory analysis of the salient perceptual dimensions. The former was obtained from paired comparisons, allowing for a verification of individual consistency. The latter was obtained from multidimensional scaling (MDS) performed on dissimilarity ratings. The outcome was a map of the stimuli in a perceptual space, where the distances between them represent the dissimilarities; the dimensions of this space must be interpreted with the help of further data. The purpose of the second experiment was to elicit relevant auditory attributes which explain the preference and dissimilarity judgments. Two fundamentally different psychometric methods were employed: In the first method, called Repertory Grid Technique (RGT), subjects were asked to directly assign verbal labels to the features when encountering them, and to subsequently rate the sounds on the scales thus obtained. The second method required the subjects to consistently identify the perceptually relevant features before assigning them a verbal label. Given sufficient consistency, a lattice representation, as frequently used in Formal Concept Analysis (FCA), can be derived to depict the structure of auditory features.
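The core of the MDS step can be illustrated with a minimal sketch of classical (Torgerson) scaling, which recovers a low-dimensional configuration from a dissimilarity matrix. This is a generic textbook illustration, not the software used in the study; the function name and the assumption of an exactly Euclidean input are ours.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed an (n, n) symmetric
    dissimilarity matrix d into a k-dimensional configuration."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centred scalar products
    vals, vecs = np.linalg.eigh(b)             # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]           # keep the k largest
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
```

For exactly Euclidean dissimilarities the recovered configuration reproduces the input distances; for real dissimilarity ratings, the solution is a least-squares approximation whose dimensions then need interpretation, as described above.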
Finally, in a third experiment, a set of attributes will be quantified as well as their contribution to overall preference. A necessary condition is individual consistency, which is checked by means of transitivity of paired-comparison judgments, thereby validating the respective attribute as a unidimensional construct. The results of this study should lead to a deeper understanding of the relation between spatial auditory perception and the quality of reproduced sound.
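The transitivity check on paired-comparison judgments amounts to counting circular triads (a preferred to b, b to c, yet c to a). The sketch below is a generic illustration of that count, not the study's actual analysis code; the representation of preferences as (winner, loser) pairs is our assumption.

```python
from itertools import combinations

def circular_triads(prefs, items):
    """Count intransitive (circular) triads in a set of
    (winner, loser) paired-comparison outcomes."""
    beats = lambda a, b: (a, b) in prefs
    count = 0
    for a, b, c in combinations(items, 3):
        # a triad is circular if the three judgments form a cycle
        if (beats(a, b) and beats(b, c) and beats(c, a)) or \
           (beats(b, a) and beats(c, b) and beats(a, c)):
            count += 1
    return count
```

A count of zero means the judgments are perfectly transitive, supporting the interpretation of the attribute as a unidimensional construct.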
In the construction of high-quality audio equipment, the aim is often a situation where the listener is positioned in a relatively large room, e.g. a living room. The listener is supposed to be seated in an ideal listening position with the loudspeakers in a symmetrical setup, and it is assumed that the room is more or less quiet. A very special type of listening room, which becomes relevant in connection with the development of high-quality audio systems, is the car cabin. It has some distinct differences from the reference listening situation: it is relatively small, a number of listeners are seated quite closely together, and there are several more or less pronounced noise sources, such as wind, engine, and tire noise. It is evident that such a listening environment brings other aspects to the assessment of audio systems than the reference situation. The project aims at finding methods as well as metrics for the controlled assessment of audio systems in cars.
One of the pertinent unresolved problems in psychoacoustics is to find out which auditory sensations are elicited by acoustic stimuli. In this project, a feature-based representation of auditory stimuli is proposed and tested experimentally. Within a measurement-theoretical framework it can be decided whether a representation of subjective judgments by a set of auditory features is possible, and how unique such a representation is. Further, the new method avoids the confusion of perceptual and verbal abilities of the listeners, in that it strictly separates the process of identifying auditory features from labeling them. In a first study, the approach was applied to simple synthetic sounds with well-defined physical properties (narrow-band noises and complex tones). For each stimulus triad, listeners had to judge whether the first two sounds displayed a common feature which was not shared by the third, by responding with a simple "yes" or "no." Due to the high degree of consistency in the responses, feature structures could be obtained for most of the subjects. In a second study, different formats of audio reproduction (mono, stereo, and various multi-channel formats) were investigated. For more than half of the sample, representations resulted which allow for interesting conclusions about the auditory features which characterize these complex sounds. In summary, the proposed procedure constitutes a valuable supplement to the arsenal of psychometric methods where the main focus is on identifying the type of sensation itself, rather than measuring its threshold or magnitude.
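The triadic judgment can be formalized very compactly: if each sound is represented as a set of features, the correct answer to a triad is "yes" exactly when the first two sounds share a feature the third lacks. The sketch below, with hypothetical sound and feature names of our own choosing, illustrates this formalization and a consistency check of responses against a candidate feature assignment; it is not the project's actual test software.

```python
def triad_answer(f1, f2, f3):
    # "yes" iff sounds 1 and 2 share a feature that sound 3 lacks
    return bool((f1 & f2) - f3)

def consistent(assignment, responses):
    """assignment: {sound: set of features};
    responses: {(s1, s2, s3): bool} observed triad judgments."""
    return all(triad_answer(assignment[a], assignment[b], assignment[c]) == r
               for (a, b, c), r in responses.items())
```

Under such a representation, a sufficiently consistent listener's responses determine the feature structure without the listener ever having to name the features.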
Instrumental sound quality analysis has been widely used as a major methodology to detect target sounds, e.g. annoying sources, emanating from the test object. Furthermore, array techniques, such as beamforming and non-stationary spatial transformation of sound fields (NS-STSF), have been employed to localize and rank individual sound sources as a function of sound pressure, intensity, and particle velocity. However, the localization of the target sounds requires considerable engineering effort because of the complicated relationship between the two technologies: sound quality analysis and array techniques. In an initial study, a loudness and sharpness mapping is being developed using Brüel & Kjær beamforming software. A 42-channel microphone array was set up to measure simulated sources generated by two loudspeakers in an anechoic room. The results of loudness and sharpness mapping revealed considerable sound source localization advantages, such as easier detection of the louder or sharper source, in comparison with traditional sound pressure mapping. Currently, practical measurements are being conducted in the engine compartment of a passenger vehicle to validate the superiority of sound quality metrics mapping. Further studies are planned to rank the individual sources with regard to their sound quality, and to assess the overall sound quality of products which generate noise with a certain directivity in space, using these array techniques.
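The localization principle underlying such maps can be illustrated with a minimal far-field, narrowband delay-and-sum beamformer for a linear array. This is a textbook sketch under simplifying assumptions (plane waves, one frequency bin, a linear geometry rather than the 42-channel array above) and is unrelated to the Brüel & Kjær software used in the study.

```python
import numpy as np

def das_power_map(snapshots, mic_pos, freq, angles, c=343.0):
    """Delay-and-sum power over candidate arrival angles.
    snapshots: (T, M) complex narrowband samples at M microphones;
    mic_pos: (M,) microphone positions in metres along a line."""
    k = 2 * np.pi * freq / c
    powers = []
    for theta in angles:
        a = np.exp(1j * k * mic_pos * np.sin(theta))   # steering vector
        y = snapshots @ a.conj() / len(mic_pos)        # align and sum channels
        powers.append(np.mean(np.abs(y) ** 2))
    return np.array(powers)
```

The angle at which the map peaks is the estimated source direction; a sound-quality mapping replaces the plain power value at each steering direction with a metric such as loudness or sharpness computed from the beamformed signal.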
This work concerns the concepts of annoyance, interference, unpleasantness, and attention. The definition of annoyance often states that a stimulus is annoying if it interferes with a task. Unpleasantness, however, is thought to be a stand-alone attribute not implying any interference. The main research question is whether ratings of unpleasantness of sounds given full attention differ from ratings of annoyance of the same sounds when they are in the background while a secondary task is being performed. The task in the current work is a memory task requiring serial recall of visually presented digits. Judgements of annoyance are collected first with full attention, and then with low attention, that is, during the task. Other research questions include whether the experience of interference with the task affects post-task (i.e. full-attention) unpleasantness judgements, and how to operationalize attention and interference as moderator variables in sound quality evaluation.
Current sound-quality evaluation algorithms are basically monaural. That is, they are not defined for the natural case of listening with two ears receiving different input. Even for loudness, the most developed sound-quality attribute, it is unclear how binaural loudness should be computed. Most studies investigating binaural loudness have been restricted to presenting tones (or noise) at different interaural levels via headphones. Only very few studies (the latest comprehensive one dating back more than 40 years) have addressed true directional loudness of real sound sources positioned in space. The scope of these studies shall be extended in several ways: (1) by using narrow-band signals (rather than wide-band noise) that may reveal frequency-specific effects of direction on loudness, (2) by studying the phenomenon at different absolute levels, for which different degrees of summation are to be expected, (3) by using modern (adaptive forced-choice) psychophysical techniques to obtain unbiased loudness matches, and (4) by measuring the effective signal reaching the ear through the determination of individual head-related transfer functions (HRTFs). The results from the first listening experiment in an anechoic chamber show that loudness is not constant over sound incidence angles. The directional loudness matches between the loudspeakers vary over a range of 10 dB, and show considerable frequency dependency. The pattern of results also varies substantially between subjects, but can be largely accounted for by inter-individual variations in their HRTFs. In 2004, based on the results from the first experiment, binaural loudness perception of a real directional sound field was modelled. When taking the individual HRTFs into account, and investigating the effect of at-ear sound pressure levels on loudness, inter-individual variation was still retained in the data.
In order to further investigate the effect of HRTFs on directional loudness, a second experiment using individual binaural synthesis was planned and set up.
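The adaptive forced-choice matching mentioned above can be sketched with a simple 1-up/1-down staircase that adjusts the comparison level until it sounds equally loud as the reference, then averages the reversal levels. This is a generic illustration of the procedure class, not the experiment's actual protocol; the function names and parameter values are ours.

```python
def staircase_match(respond, start_db, step_db=2.0, n_reversals=8):
    """1-up/1-down adaptive track. respond(level) returns True when
    the comparison at 'level' (dB) sounds louder than the reference.
    Returns the mean of the reversal levels as the loudness match."""
    level, last_step, reversals = start_db, None, []
    while len(reversals) < n_reversals:
        step = -step_db if respond(level) else step_db   # track down if "louder"
        if last_step is not None and (step > 0) != (last_step > 0):
            reversals.append(level)                      # direction changed
        level += step
        last_step = step
    return sum(reversals) / len(reversals)
```

The track converges on the point of subjective equality; comparing that matched level across loudspeaker directions yields the directional loudness differences reported above.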
A pervasive problem in sound-quality evaluation is whether observers actually judge the sound (e.g. a low-frequency rumble), or the source (e.g. the factory across the street). Very often, it is desirable to disentangle these two aspects to get separate readings, e.g. of the sensory and emotional impact of sound. In an initial study, a signal-processing scheme developed by a collaborator at the Technical University of Munich has been used to render a number of environmental sounds unidentifiable while preserving their loudness-time functions. To apply this methodology to a stimulus set covering a larger range on the loudness continuum, first a wide range of environmental noises including various product sounds was recorded. A pilot study was performed to select sounds that are highly identifiable in their original version. In the first part of the main experiment, four independent groups of subjects (N = 25 each) responded on category-subdivision scales of loudness or annoyance to either the original or processed version of the selected signals. In the second part, subjects provided ratings of the stimuli on a Semantic Differential. Data analyses will include the specification of the impact of source identifiability on loudness and annoyance judgements, as well as an investigation of these effects in terms of differences in the semantic profiles elicited by the sounds.
Fluctuation strength, a sensation due to slow modulations (< 20 Hz) of amplitude or frequency, is one of the major psychoacoustic variables considered in sound quality evaluation. Zwicker and Fastl [Psychoacoustics (Springer, Berlin, 1999)] have proposed a model of fluctuation strength, which has been implemented in various software applications, even though its empirical data basis is rather limited. In particular, the dependency of fluctuation strength on modulation frequency and modulation depth has seemingly never been tested in a factorial design. Therefore, in Experiment 1 both of these factors were varied simultaneously in order to create 54 different frequency-modulated sinusoids. The task of the subjects was to directly estimate the perceived magnitude of fluctuation strength. The results do not conform well to the model predictions. In Experiment 2 this finding was further investigated by varying only one factor at a time. The results show that large individual differences, particularly in the effect of modulation frequency, persist. In Experiment 3, by employing a 2AFC procedure, matches in fluctuation strength were obtained. The results suggest that there exists a non-additive trade-off between modulation frequency and modulation depth. Furthermore, for low modulation frequencies the effect of modulation depth is underestimated by Zwicker and Fastl's model.
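Stimuli of the kind used in Experiment 1 can be generated straightforwardly: a frequency-modulated sinusoid is parameterized by carrier frequency, modulation frequency, and frequency deviation, with modulation index beta = delta_f / fm. The sketch below is our own minimal illustration of such stimulus generation, not the study's stimulus software.

```python
import math

def fm_sinusoid(fc, fm, delta_f, duration, sr=44100):
    """Frequency-modulated sinusoid: carrier fc (Hz), modulation
    frequency fm (Hz), frequency deviation delta_f (Hz)."""
    beta = delta_f / fm                      # modulation index
    n = int(duration * sr)
    return [math.sin(2 * math.pi * fc * t + beta * math.sin(2 * math.pi * fm * t))
            for t in (i / sr for i in range(n))]
```

Varying fm and delta_f independently on a grid yields the factorial stimulus set; in the study, 54 such combinations were presented.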
While the literature stresses the importance of expert panels and of training in evaluating sound-quality attributes, the abilities of experts, and the benefits of long-term training, have typically only been investigated for a very limited range of tasks (e.g. naming timbral qualities). By contrast, the goal of the present investigation is to develop a test battery for a wide range of auditory capabilities in order to assess individual listeners. All tasks have been set up in an objective format (e.g. identifying which of three sound samples is different) that is suited for studying both simple (e.g. loudness, pitch, or interaural level discrimination) and complex (e.g. spectral-shape discrimination) auditory phenomena. A number of tasks were selected and implemented using specialised signal-processing software. Data were then collected on 24 listeners who had been considered for participation in an expert listening panel for evaluating the sound quality of hi-fi audio systems. These data were compared with the outcome of a conventional selection procedure, and listeners who were accepted for the panel were contrasted with those who were rejected. Furthermore, the test battery data were related to the performance of the listeners when judging the degradation in quality produced by audio codecs.
Only recently have models of stationary loudness been extended to handle non-stationary sounds. Some of this modelling is based on preliminary assumptions and may require further experimentation. With the ultimate goal of probing and extending existing loudness models, a series of experiments has been planned to investigate the temporal weighting listeners implicitly use when judging the overall loudness of sounds fluctuating in level. Results of the first experiment show that listeners do not employ an averaging strategy, as assumed in conventional noise-evaluation algorithms, for example. Rather, the onset of the sound is perceptually emphasized, though large individual differences exist. This suggests that the judgement of integrated loudness is accomplished at a high level of cognition. If this is indeed the case, it calls into question the way integration is handled by the majority of loudness models. New experiments have been planned to examine this question in greater depth.
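A common way to estimate such implicit temporal weights is perturbation analysis: the level of each temporal segment is jittered independently from trial to trial, and the listener's overall-loudness judgments are regressed on the per-segment perturbations. The sketch below illustrates the regression step under the simplifying assumption of continuous (rating-like) judgments; it is our illustration, not the study's analysis code.

```python
import numpy as np

def temporal_weights(perturbations, judgments):
    """Least-squares estimate of per-segment perceptual weights.
    perturbations: (trials, segments) level jitter in dB;
    judgments: (trials,) overall-loudness responses."""
    X = np.column_stack([perturbations, np.ones(len(judgments))])
    coef, *_ = np.linalg.lstsq(X, judgments, rcond=None)
    return coef[:-1]                      # drop the intercept
```

A flat weight profile would indicate the averaging strategy assumed by conventional algorithms; the onset emphasis reported above corresponds to an elevated weight on the first segment.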
Multidimensional scaling is a technique that is based on dissimilarity data and yields, via statistical modeling, an optimal low-dimensional vector space. The dimensions of this space can be interpreted as signifying relevant attributes underlying the dissimilarity judgments. This technique was applied to a heterogeneous set of environmental sounds in order to investigate in which way their perceptual (dis-)similarity could be represented, and which psychoacoustic and other parameters played a role in the representation. Moreover, the degree to which individuals corresponded in their judgments was assessed. An experimental set-up for automated stimulus presentation and response collection was developed and implemented, and data were collected on 79 subjects. The analysis revealed a three-dimensional solution; using linear regression, these three dimensions could be associated with instrumental measures of loudness (RSQ = .83) and sharpness (RSQ = .83), and with subjectively measured unpleasantness (RSQ = .69). Further results indicate that the subjects employed largely the same criteria for their judgments.
Current research in sensory evaluation makes wide use of verbal descriptors of perceptual attributes. Typically, an expert panel is assembled which, after time-consuming discussions and perceptual training, agrees on a set of labels to describe perceptual characteristics relevant to a class of products, be they household equipment or hi-fi loudspeakers. Problems of this approach include that both panelists within a group and different panels tend to disagree in their choice of words. It remains unclear, however, whether these differences are due to varying labels for the same underlying attributes, or are indicative of a different semantic (or perceptual) structure altogether. In order to assess individual differences in the use of descriptors, while taking into account the semantic structure underlying the assessments, a technique called "formal concept analysis," which allows for the mathematical modeling of the interrelations of verbal concepts, was adapted to this domain. Experimental procedures and automated data analyses have been implemented, and the feasibility of the technique, which had never been applied to perceptual concepts previously, has been tested. In a first application, the method was put to use in assessing the semantic structure employed by members of a "viewing panel," thus specifying (1) the consistency with which descriptors of screen quality are used irrespective of their labeling, and (2) the individual differences in the use of descriptors. In a follow-up it was investigated whether (3) training succeeded in harmonizing the semantic structure of the panelists.
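The central object of formal concept analysis is easy to state: given a binary object-attribute context, a formal concept is a pair (extent, intent) in which the extent is exactly the set of objects sharing the intent, and vice versa; the set of all concepts forms the lattice mentioned above. The brute-force sketch below enumerates the concepts of a tiny context; it is a generic illustration with hypothetical sound descriptors, not the project's implementation, and is exponential in the number of objects.

```python
from itertools import combinations

def formal_concepts(context):
    """All formal concepts (extent, intent) of a binary context
    given as {object: set of attributes}."""
    objects = sorted(context)
    attributes = set().union(*context.values())

    def intent(objs):
        sets = [context[o] for o in objs]
        return set.intersection(*sets) if sets else set(attributes)

    def extent(attrs):
        return {o for o in objects if attrs <= context[o]}

    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            i = intent(objs)                      # attributes shared by objs
            concepts.add((frozenset(extent(i)), frozenset(i)))
    return concepts
```

Comparing the concept lattices derived from different panelists' descriptor usage makes it possible to judge whether two panelists share a semantic structure even when their labels differ.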
A challenging issue in sound-quality research is how to find new, as yet unknown perceptual attributes that contribute to the overall evaluation. To that end, rather than asking listeners to elaborate their verbal repertoire, it might be worthwhile to just require very simple comparative judgments from them, while deriving the underlying dimensional structure from subsequent modeling of the observers' behavior. This approach was taken by presenting a fairly large sample of 79 listeners with all possible pairs of 12 environmental sounds selected for their heterogeneity in psychoacoustical attributes. Judgments of all pairs with respect to overall unpleasantness were analyzed with regard to compliance with the Bradley-Terry-Luce model and the less restrictive preference-tree model. The latter model provided a valid representation of the paired-comparison judgments, and permitted construction of a ratio scale of unpleasantness. Furthermore, it revealed that this attribute may not be considered one-dimensional. Instead, three sub-groups of sounds were identified, which could be defined by their (non-acoustical) intrusiveness and their loudness. Within the sub-groups of soft and loud sounds, a combination of two instrumental sound measures, namely psychoacoustical sharpness and roughness, the latter differing in magnitude for the two groups, explained the unpleasantness judgments very well.
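The Bradley-Terry-Luce model assigns each sound a positive strength p_i such that the probability of preferring i over j is p_i / (p_i + p_j); the strengths then form a ratio scale. A minimal sketch of the standard MM (minorization-maximization) fitting updates, due to Hunter (2004), is given below; this is a generic illustration, not the study's analysis code.

```python
def fit_btl(wins, iterations=500):
    """Bradley-Terry-Luce strengths from a win-count matrix
    wins[i][j] = number of times i was preferred over j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iterations):
        new = []
        for i in range(n):
            w_i = sum(wins[i][j] for j in range(n) if j != i)      # total wins of i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])  # MM update
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom else p[i])
        total = sum(new)
        p = [v * n / total for v in new]       # normalize; ratios are what matter
    return p
```

Systematic misfit of this model, as found here, motivates the preference-tree model, which relaxes the assumption that a single strength per sound suffices and thereby reveals the sub-group structure described above.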
In sound-quality research, a host of methodologies are employed. Often, ratings or rankings are used to assess sound-quality attributes. Sometimes, the magnitude of the sound attribute in question is directly estimated via magnitude-estimation or production techniques. Finally, in recent years, indirect methods like multidimensional scaling, or the mathematical modeling of the listeners' cognitive decision strategies in so-called 'choice models', have been introduced to the research field. All these methodologies vary in their time demands, require different judgments from the listeners, and yield results of differing scale type. In order to compare some of these methods, 74 listeners were asked to make paired comparisons, magnitude estimates, and similarity ratings, and to generate rank orderings of a set of environmental sounds. The results show that, while most methods agree on an ordinal level, the more sophisticated choice models are both more informative with respect to the dimensional structure involved and less compressive in the scale values obtained, when compared with the direct methods. In a parallel study, the method of magnitude production was evaluated in greater detail by putting the fundamental mathematical assumptions underlying the approach to an empirical test in the field of loudness fractionation. It turned out that while all subjects were able to give judgments that were valid on a ratio scale, the numbers they used could not be taken at face value. In a second experiment, it was not possible to establish a very general class of transformation functions that would relate the number words to their 'true' mathematical representation.
Different sound-reproduction systems (e.g. stereo, multi-channel systems, surround sound) may be distinguished by their capability to render an authentic spatial impression of the original auditory "scene". In order to study these systems, a first requirement is to have a valid and reliable procedure by which listeners can report their perception of where a sound comes from. In the literature, various procedures ranging from drawing sketches to head pointing have been suggested, many of which lack either precision or intuitive "naturalness". Therefore, a new hand-held laser-pointing device, whose position is read by a magnetic tracker, was developed. This device permits the listener to point naturally at the direction of the source while receiving visual feedback from a small laser-projected dot. In a first, methodological experiment, the new laser-pointing technique was evaluated in comparison with a graphical procedure in which listeners had to make a mark on a line in order to indicate where the sound was perceived to emanate from. In a formal experiment with 11 participants, the new technique was found to be superior both in its precision (ca. 1.5 to 2 degrees) and in its accuracy of identifying both real and "virtual" sources. In a second experiment, the effect of head movements on the pointed angle was quantified. The technique was subsequently used in an investigation of the effect of loudspeaker directivity on the perceived direction of stereo-panned sound sources.