
A bit of filter theory

Discussion in 'd.i.y.' started by philiphifi, Sep 12, 2019.

  1. ToTo Man

    ToTo Man the band not the dog

    I've never heard an NOS DAC but proponents of them often cite a smooth and slightly rolled-off top end as a reason for using them, an observation that appears contrary to the above?!
  2. Arkless Electronics

    Arkless Electronics Trade: Amp design and repairs.

    Indeed, but as I said they vary widely... some have no effect below about 200 kHz! I'm assuming HF crap is present beyond 50 kHz...

    I'm more concerned about amplifier effects than tweeter ones personally.
  3. John Phillips

    John Phillips pfm Member

    I don't think it's as contradictory as it may seem. As may happen in NOS DACs (IIUC), if you reconstruct the audio at the DAC's output by just creating an output voltage corresponding to each digital sample and holding that voltage constant until the next digital sample (AKA zero-order hold), then the audio signal's frequency response gets shaped by a sinc function. And that does indeed roll off the audio at high frequencies if not corrected by an inverse-sinc filter. That's independent of any issues to do with ultrasonic aliases.
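    To put numbers on that droop, here's a quick Python sketch (purely illustrative, not modelled on any particular DAC) of the zero-order-hold sinc response at a 44.1 kHz sample rate:

```python
import math

def zoh_droop_db(f_hz, fs_hz):
    """Zero-order-hold amplitude response at f_hz, in dB.

    Holding each sample flat for one sample period multiplies the spectrum
    by H(f) = sin(pi*f/fs) / (pi*f/fs), the 'aperture' sinc.
    """
    x = math.pi * f_hz / fs_hz
    return 0.0 if x == 0 else 20 * math.log10(math.sin(x) / x)

fs = 44100.0
for f in (1000, 10000, 20000):
    print(f"{f:>5} Hz: {zoh_droop_db(f, fs):6.2f} dB")
# the roll-off is gentle: about -0.75 dB at 10 kHz and -3.2 dB at 20 kHz
```

    That mild top-end droop is consistent with the "smooth, slightly rolled-off" character NOS listeners report, quite apart from the question of the ultrasonic images.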
  4. Julf

    Julf Evil brother of Mark V Shaney

    That rolloff still allows a fair bit of harmonics / HF energy.
  5. davidsrsb

    davidsrsb pfm Member

    It depends on the music: a high-level sine wave at 1 kHz might be better than that, while a medium-level sine at 10 kHz will be far, far worse.
  6. Jim Audiomisc

    Jim Audiomisc pfm Member

    One way to look at the HF 'rolloff' effect of not using a filter is that the 'missing' signal power is being presented instead as components above Nyquist.

    The problem then is that following kit may - via nonlinearity - then generate difference products at lower i.e. audible frequencies. I did write a general blurb on this years ago but can't recall if I ever published it! I'll have a dig as it has some pretty diagrams, etc.
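    As a rough Python illustration (all numbers made up for the example): the unfiltered images of a sampled tone sit at k*fs +/- f0, weighted only by the zero-order-hold sinc, so the first image of a 10 kHz tone at 44.1 kHz is barely 11 dB down.

```python
import math

def images_db(f0_hz, fs_hz, f_max_hz):
    """Unfiltered image frequencies k*fs +/- f0 below f_max_hz, with their
    zero-order-hold sinc amplitudes in dB (re an unweighted full-scale tone)."""
    out = []
    k = 1
    while k * fs_hz - f0_hz < f_max_hz:
        for f in (k * fs_hz - f0_hz, k * fs_hz + f0_hz):
            if f < f_max_hz:
                x = math.pi * f / fs_hz
                out.append((f, 20 * math.log10(abs(math.sin(x) / x))))
        k += 1
    return out

for f, db in images_db(10_000, 44_100, 100_000):
    print(f"{f:>7.0f} Hz  {db:6.1f} dB")
```

    A second-order difference between, say, the 34.1 kHz and 54.1 kHz images lands at 20 kHz, and higher-order products land lower still - which is exactly the "difference products at audible frequencies" worry.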
    darrenyeats and Julf like this.
  7. Julf

    Julf Evil brother of Mark V Shaney

    Indeed. And my point was that that is still an issue with zero-order hold.

    Absolutely. Any non-linearity will always result in intermodulation products.
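    A minimal Python sketch of that (assuming, purely for illustration, a weak 1% second-order nonlinearity in the downstream kit): two ultrasonic tones go in, and an audio-band difference tone comes out.

```python
import math

def tone_level(signal, fs, f):
    """Amplitude of the component at frequency f (single-bin DFT);
    exact when f spans an integer number of cycles over the signal."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs, f1, f2 = 192_000, 30_000, 31_000    # two ultrasonic tones
n = fs // 10                            # 0.1 s: integer cycles of everything
clean = [math.sin(2 * math.pi * f1 * i / fs) + math.sin(2 * math.pi * f2 * i / fs)
         for i in range(n)]
bent = [x + 0.01 * x * x for x in clean]   # y = x + 0.01*x^2 nonlinearity

print(tone_level(clean, fs, f2 - f1))   # essentially zero before the nonlinearity
print(tone_level(bent, fs, f2 - f1))    # ~0.01: a 1 kHz difference tone appears
```

    Neither input tone is anywhere near 1 kHz, yet the bent version carries a 1 kHz component at the full 1% level of the nonlinearity.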
  8. John Phillips

    John Phillips pfm Member

    I think I may have been unclear. If you do the two elements in the second set of bullets in my post perfectly then that is mathematically equivalent to reconstruction.

    In the real world you cannot do the mathematics perfectly; you can only do it sufficiently. So I was saying that it seems better to look separately at those two elements of reconstruction, rather than go down the "perfect mathematics" route that Chord tries so hard to approach.
  9. Julf

    Julf Evil brother of Mark V Shaney

    Not sure I agree. Reconstruction is totally defined mathematically. What does "not audibly damage the wanted signal" mean in mathematical terms?

    In a proper conversion process, you can't do the parts separately. The "perfect mathematics" route is how it has been done for the last 80 years or so, ever since Claude Shannon and Harry Nyquist.
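    For anyone following along, that mathematically defined reconstruction is the Whittaker-Shannon interpolation formula, x(t) = sum over n of x[n]*sinc(t - n). A toy Python version (finite record, so it is only approximate near the ends):

```python
import math

def sinc(t):
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def reconstruct(samples, t):
    """Whittaker-Shannon interpolation; t in units of the sample period."""
    return sum(x * sinc(t - n) for n, x in enumerate(samples))

# a band-limited test signal: a sine at 0.1 cycles per sample
samples = [math.sin(2 * math.pi * 0.1 * n) for n in range(200)]
mid = reconstruct(samples, 100.5)                # a point between two samples
print(mid, math.sin(2 * math.pi * 0.1 * 100.5))  # agree closely mid-record
```

    At the sample instants the sinc terms collapse to the samples themselves; between them the formula recovers the original band-limited waveform, which is the sense in which reconstruction is "totally defined".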
  10. Jim Audiomisc

    Jim Audiomisc pfm Member

    WRT the 1st point: the snag being that in real cases we have to implement that mathematics with engineering that lacks perfection, be that via digital or analogue filtering, etc. All we can get is 'close enough that the level of error is too small to matter' for the purpose the system was built for.

    WRT the 2nd point: however, we can choose to do reconstruction in various ways, some being partly digital and partly analogue. These aren't separate because they have to work together, but they are separate in terms of the way they get built.
  11. John Phillips

    John Phillips pfm Member

    It's really difficult to go into details in this type of forum. But AIUI the perfect reconstruction filter has a sinc impulse response. The Fourier transform of that is a perfect low-pass filter with two elements: complete rejection of signals in the stop-band (the aliases); and completely flat pass-band response (remembering that the FT has both amplitude and phase/time delay).
    I'm sorry I was not clearer.
    I didn't say do the elements separately, I said look at them separately (in particular look at what stop-band requirements are needed and what pass-band requirements are needed in the real world). In practical engineering there are always imperfections. If a designer grasps that, the complexity of a mathematically proper sinc filter may not be required to get good enough stop-band (ultrasonic alias) rejection, and a flat enough pass-band (amplitude and time delay).

    EDIT: and BTW, there are definitely people who prefer a reconstruction filter with an impulse response that isn't the "perfect" sinc function (although I didn't hear much difference when I tried various types).
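    As a concrete (and deliberately modest) example of trading mathematical perfection for "good enough": a 101-tap Hamming-windowed sinc in Python. Nothing like a real DAC filter's length, but it already shows the pass-band and stop-band requirements can be inspected separately.

```python
import math

def windowed_sinc(num_taps, cutoff):
    """Hamming-windowed sinc low-pass FIR; cutoff as a fraction of fs."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2
        ideal = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        taps.append(ideal * window)
    return taps

def response_db(taps, f):
    """Magnitude response at normalized frequency f (fraction of fs)."""
    re = sum(t * math.cos(2 * math.pi * f * n) for n, t in enumerate(taps))
    im = sum(t * math.sin(2 * math.pi * f * n) for n, t in enumerate(taps))
    return 20 * math.log10(math.hypot(re, im))

taps = windowed_sinc(101, 0.25)
print(response_db(taps, 0.10))   # pass-band: within a fraction of a dB of flat
print(response_db(taps, 0.40))   # stop-band: Hamming's ~-53 dB floor or better
```

    Swap the window or the tap count and the stop-band rejection and pass-band flatness move independently - which is the practical design trade being discussed.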
  12. Julf

    Julf Evil brother of Mark V Shaney

    OK, that makes sense. Thanks for the clarification.
  13. John Phillips

    John Phillips pfm Member

    On the subject of reconstruction filters, has anyone read the HiFi Critic article on the Chord M Scaler here?

    It's very well written, by Keith Howard, and gets to my understanding on real-world reconstruction filters by the final paragraph on the second page ("page 17").

    Howard criticizes this understanding and raises the question of inter-sample reconstruction versus filter coefficient length at the end of column 1 on the third page ("page 18"): "The obvious question is: by how much must [the envelope of the sinc function] decay for its contribution to inter-sample wave shape to become insignificant?"

    However he then seems to duck the question with a hand-waving argument: "That’s not a straightforward question to answer but if we say 100dB, to take the envelope amplitude below the 16-bit noise floor for a 0dBFS (full scale) sample, we can easily calculate what excerpt of the sinc function is required." This seems to me to be a non sequitur, connecting sampling/reconstruction with quantization without giving a reason. However, the rest of the article seems to be based on this connection.

    I have done practical work on digital audio systems (though not for HiFi applications) but never managed to see the connection Howard postulates above. Have I missed this and if so what is the connection?
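    Whatever one makes of the connection, the arithmetic Howard alludes to is easy to reproduce. If the sinc envelope decays as 1/(pi*n), with n in sample periods from the centre, the span needed for it to fall a given number of dB is, in Python:

```python
import math

def one_sided_span(atten_db):
    """Samples either side of centre before the sinc envelope 1/(pi*n)
    has decayed atten_db below the main lobe."""
    return math.ceil(10 ** (atten_db / 20) / math.pi)

n = one_sided_span(100)     # Howard's 100 dB criterion
print(n, 2 * n + 1)         # ~31831 samples each side, ~63663 taps in total
```

    That tap count is reckoned at the raw sample rate; an oversampling filter needs roughly proportionally more coefficients to cover the same time span.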
  14. Jim Audiomisc

    Jim Audiomisc pfm Member

    Short answer: I'm not sure. :)

    Longer speculation: this might be assessed by considering the reconstruction filter as a pure 'infinite' (sic) length sinc function multiplied by (i.e. windowed with) a cutoff weighting function that is all-zeros beyond a given span. Then calculate the maximum possible energy fraction this loses compared with a perfect 'infinite' sinc. If that fraction is equivalent to below the dither, then forget it?

    Not read the article (yet) though. So made that up. :)
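    That speculation can be roughed out in Python. Using the approximation that sin^2 averages to 1/2, the energy of the ideal sinc beyond +/-N samples is about 1/(2*pi^2*N) per side, against a total energy of 1 (all numbers illustrative, not what either article computed):

```python
import math

def discarded_energy_db(n_samples):
    """Approximate fraction (dB) of the ideal sinc's unit energy thrown away
    by truncating at +/- n_samples: the integral of sin^2(pi*t)/(pi*t)^2
    beyond n is ~ 1/(2*pi^2*n) per side, so ~ 1/(pi^2*n) for both tails."""
    return 10 * math.log10(1.0 / (math.pi ** 2 * n_samples))

def span_for_db(atten_db):
    """Samples per side before the discarded energy fraction is atten_db down."""
    return math.ceil(10 ** (atten_db / 10) / math.pi ** 2)

print(discarded_energy_db(1000))   # a +/-1000-sample sinc discards ~ -40 dB of energy
print(span_for_db(96))             # span for the loss to sit below a 16-bit floor
```

    By this energy metric the spans come out far larger than an amplitude-envelope criterion gives, which rather underlines that the answer depends on which metric you decide matters.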
  15. John Phillips

    John Phillips pfm Member

    I will think on that, but it does not immediately cause a lightbulb moment. My knowledge of digital audio processing was gained through practical experiment building lab demonstrator systems, and there is probably quite a lot of theory I may not have picked up from just reading the textbooks and literature at the time.

    My original question is based on an assumption that a FIR digital filter, however long or short it is and however truncated the coefficients, is fully characterized by its frequency and phase response, provided the bit depth of the signal-path processing (including dither) is appropriate to the signal being filtered. So you choose the filter length and coefficient values based on pass-band and stop-band requirements in the usual fashion. There's still no useful connection I can yet see between the noise floor of the signal and the choice of filter length or the resolution of the filter coefficients. But perhaps I am still missing something.

    I do have a simulation in GNU Octave of a FIR/sinc filter where I can change the FIR length and how the sinc coefficients are windowed and truncated. So far the simulations show what I would expect in the amplitude / frequency response, with no significant changes away from flat in the group delay. It was quickly thrown together so I have to be wary of errors on my part. But I can't yet reconcile it with the connection made in the article, which bugs me.
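    On the flat group delay: that is expected, since a symmetric FIR is exactly linear-phase, so its group delay is (N-1)/2 samples at every frequency however the coefficients are truncated. A small Python check (numerical differentiation of the phase; the 5-tap filter is an arbitrary symmetric example):

```python
import math

def group_delay_samples(taps, f, df=1e-4):
    """Group delay -d(phase)/d(omega) at normalized frequency f, in samples."""
    def phase(freq):
        re = sum(t * math.cos(2 * math.pi * freq * n) for n, t in enumerate(taps))
        im = sum(-t * math.sin(2 * math.pi * freq * n) for n, t in enumerate(taps))
        return math.atan2(im, re)
    dphi = phase(f + df) - phase(f - df)
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi   # unwrap a 2*pi jump
    return -dphi / (2 * math.pi * 2 * df)

taps = [0.1, 0.2, 0.4, 0.2, 0.1]        # any symmetric FIR will do
print(group_delay_samples(taps, 0.05))  # (5 - 1) / 2 = 2 samples
print(group_delay_samples(taps, 0.15))  # the same at any pass-band frequency
```

    So any non-flat group delay appearing in such a simulation would point to an asymmetry or a bug, not to the truncation itself.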
  16. Julf

    Julf Evil brother of Mark V Shaney

    A real problem with digital audio is that a lot of it is counter-intuitive (and requires some non-trivial mathematics).
  17. IDM

    IDM pfm Member

    Possibly slightly off topic, but given the interesting comments on non-OS versus oversampled DACs and digital filtering: does anyone know how some of the Raspberry Pi streaming software packages, such as Moode, work? Does the software provide filtering or oversampling?

    Moode provides an I2S signal which I am currently feeding into a TDA1541A DAC. It had never occurred to me to think about what level, if any, of digital filtering is being performed. My lack of thought is purely down to my ignorance of the whole subject, but I would really like to know whether my DAC is producing lots of high-frequency rubbish or not.

  18. JensenHealey

    JensenHealey pfm Member

    Streamers would ordinarily not do any digital filtering. The job of a streamer or streaming software is to pull in the data from where it is stored, decode it (if it is in a compressed format such as FLAC) and deliver the music in unadulterated digital form to the next stage in the chain - often a DAC.

    All this talk of Non -OS or OS and filtering is a question of how the DAC and signal reconstruction operates.
    Julf likes this.
  19. Jim Audiomisc

    Jim Audiomisc pfm Member

    Your second point is correct, but hides the critical detail of how *accurately* one form represents the other for the *data*.

    The reply mechanism here makes math difficult to show. But the argument I put forwards earlier is based on the following.

    1) Consider the input data set as a series of values, a(i), at a series of times, t(i), which are spaced at the sample rate.

    2) Regard the filter as a set of sinc-ish coefficients b(i) at the relevant time offsets from 'now' - i.e. slide to align so that b(0) is for some chosen input sample.

    3) Each b(i) may differ from the 'ideal' sinc(i) values. In particular for large enough +/- values of i it may be zero.

    4) Subtract b(i) from sinc(i), i.e. take the difference between the 'ideal' weightings and the ones you chose. This will give you an error set e(i) = sinc(i) - b(i).

    Ignore the signal. :)

    Instead look at e(i). Note its sign. Assume that the input signal is *actually* a set of values oh_bugger(i) which is always max amplitude (+/- 1 for simplicity) but has the same sign as e(i) for each i value. Add up the e(i) x oh_bugger(i) values for *all* i.

    If I've guessed right that represents the worst possible error you'd get between the output and what you 'should have got if using the ideal sinc'. So it tells you the result of apodising/windowing the chosen filter as a TDF. Then up to you what choice to make.

    I've ignored the finite resolution of values - i.e. assumed all the coefficients and math are defined well enough to ensure the above is fine.

    The alternative is to use an IIR filter and say 'booger all that!' :)

    N.B. Had to use ( ) because this interface assumes square brackets mean markup!
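    That recipe sketched in Python (with an assumed 8x oversampling so the sinc is sampled at fractional offsets, and the 'infinite' sum cut off at an arbitrary far point; all numbers illustrative):

```python
import math

def sinc(t):
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def worst_case_error_db(span, os=8, far=10_000):
    """Truncate an os-times-oversampled sinc at +/- span input samples.
    e(i) = sinc(i) - b(i) is then just the discarded tail, and the worst-case
    full-scale signal whose sign tracks e(i) contributes sum(|e(i)|), taken
    here relative to the worst-case full output sum(|sinc(i)|)."""
    dropped = 2 * sum(abs(sinc(k / os)) for k in range(span * os, far * os))
    kept = 1.0 + 2 * sum(abs(sinc(k / os)) for k in range(1, span * os))
    return 20 * math.log10(dropped / (kept + dropped))

print(worst_case_error_db(100))   # bound for a +/-100-sample truncation
print(worst_case_error_db(400))   # longer filter, smaller bound
```

    Notably the bound shrinks only logarithmically with filter length (the sum of |sinc| diverges), which is why this absolute worst case is so pessimistic compared with estimates weighted by plausible signal spectra.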
    Julf likes this.
  20. MarkW

    MarkW Full Speed & Pagan

    The linked article gives some useful insight, for those less versed in the maths, into how reconstruction filters work and why the implementation is non-trivial. You can kind of infer that for the NOS brigade it all requires a bit too much thinking, so they conveniently skip it with the argument that ears are the ultimate filter. That argument works if you don't appreciate the mathematical complexity or the problems of intermodulation distortion caused by out-of-band artifacts (just for starters).

    To Mr Howard's final request for software that basically swaps FPGA cost/complexity for large file sizes and long processing times, HQPlayer Pro from Signalyst seems to offer just that, albeit the license cost is high and the product is aimed at professional studios.
