
flac on steroids - Part 1

Today I'd like to share my little "flac on steroids" project.
...obviously inspired by "sox on steroids" ;)

I was triggered by the recent benchmark announcements on Phoronix

The promise: flac now delivers 5% faster encoding and decoding
thanks to a faster CRC algorithm.

That sounds nice! 

Let's have a closer look at it.






There hasn't been that much evolution on flac lately. Great that somebody made the effort.

The main issue you'll face: 

How to get the flac beast with that updated CRC algorithm on your machine!?!?

Bad luck for most of you. You have to wait. 

You'd need a flac version greater than 1.3.2 installed to have that feature inside.
1.3.3 is not even released yet.
And once that happens one day, your OS maintainers will still need ages to get it introduced.
For LMS users it'll take even longer.

So. 99.99% of you won't have the pleasure to enjoy the extra power for now.


Ok. What now? As usual: if you want bleeding edge stuff, there's no other way
than building the binary yourself. It's pretty straightforward though.



Some background info on what affects flac binary performance.

flac offers several options to seriously improve its performance - just from the code perspective!
E.g. flac can make use of SSE, SSE2 and AVX2. These CPU features mainly apply to Intel platforms though!
Ever wondered why flac is that slow on an RPI?
Furthermore, flac can make use of C++ or assembler (nasm).

There are quite a few variables around.
The usual issue: you just don't know how your flac was compiled and whether it makes use of any of these "turbos".
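A quick heuristic check of an installed binary: disassemble it and look for typical SIMD mnemonics. Treat this as a rough sketch only - the presence of AVX2 instructions doesn't prove the fast code paths are actually taken at runtime:

objdump -d "$(which flac)" | grep -c -E 'vpermd|vpbroadcastd'

A count greater than 0 means AVX2 instructions are present in the binary; 0 most likely means they are not.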


Bottom line: The way flac gets compiled - and that includes the target CPU architecture - can have a huge impact on its performance!
Compiling it yourself is, I'd say, a pretty good idea!


I ran my own compiled flac on my Intel NUC with all performance options switched on.




Let's have a look at the benchmark. 

I am gonna try to reproduce the promised "+5%" first.


BTW: 
As benchmark tool I'm using "perf" now. It seems to be reliable and more precise
compared to e.g. "time", which I used for benchmarking sox earlier.

Preps:


  1. I reinstalled the Ubuntu flac and libs (dynamic linked binary)
  2. I then downloaded the Ubuntu flac sources and did a static compilation
  3. And I fetched the flac sources from git and compiled that statically
  4. I ran the encode and decode benchmarks

And here comes the result:

Binary = /tmp/flac-1.3.2-ubu
Performance counter stats for '/tmp/flac-1.3.2-ubu --totally-silent --compression-level-5 -f -o /tmp/test16.flac.flac-1.3.2-ubu /tmp/test16.wav' (10 runs):
   1031,998175      task-clock (msec)         #    1,000 CPUs utilized            ( +-  0,07% )
6 context-switches # 0,006 K/sec ( +- 16,01% )
1 cpu-migrations # 0,001 K/sec ( +- 36,85% )
192 page-faults # 0,186 K/sec ( +- 0,41% )
2.757.615.568 cycles # 2,672 GHz ( +- 0,07% )
5.792.144.336 instructions # 2,10 insn per cycle ( +- 0,03% )
423.397.735 branches # 410,270 M/sec ( +- 0,06% )
11.845.109 branch-misses # 2,80% of all branches ( +- 0,03% )

1,032314326 seconds time elapsed ( +- 0,07% )

Binary = /tmp/flac-1.3.2-ubu-static
Performance counter stats for '/tmp/flac-1.3.2-ubu-static --totally-silent --compression-level-5 -f -o /tmp/test16.flac.flac-1.3.2-ubu-static /tmp/test16.wav' (10 runs):
   1046,480818      task-clock (msec)         #    1,000 CPUs utilized            ( +-  0,07% )
5 context-switches # 0,005 K/sec ( +- 14,30% )
0 cpu-migrations # 0,000 K/sec ( +- 44,72% )
184 page-faults # 0,176 K/sec ( +- 0,24% )
2.801.189.305 cycles # 2,677 GHz ( +- 0,07% )
4.776.156.386 instructions # 1,71 insn per cycle ( +- 0,03% )
403.541.845 branches # 385,618 M/sec ( +- 0,06% )
11.491.004 branch-misses # 2,85% of all branches ( +- 0,05% )

1,046770327 seconds time elapsed ( +- 0,07% )

Binary = /tmp/flac-git-static
Performance counter stats for '/tmp/flac-git-static --totally-silent --compression-level-5 -f -o /tmp/test16.flac.flac-git-static /tmp/test16.wav' (10 runs):
    923,622729      task-clock (msec)         #    1,000 CPUs utilized            ( +-  0,09% )
4 context-switches # 0,005 K/sec ( +- 18,62% )
0 cpu-migrations # 0,001 K/sec ( +- 33,33% )
180 page-faults # 0,195 K/sec ( +- 0,21% )
2.472.003.020 cycles # 2,676 GHz ( +- 0,07% )
5.108.543.740 instructions # 2,07 insn per cycle ( +- 0,03% )
541.381.977 branches # 586,151 M/sec ( +- 0,05% )
11.537.502 branch-misses # 2,13% of all branches ( +- 0,03% )

0,923934894 seconds time elapsed

Result:
The results show an around 11% gain on the encode side for the flac built from git sources - against both Ubuntu versions (repo binary and self-compiled), which don't have the CRC optimization applied yet.
11% gain for the CRC-improved binary. Nice! More than expected.
Somehow the binary compiled from the Ubuntu sources shows a slightly lower performance than the dynamically linked Ubuntu version. Let's just accept that as it is. We made our case.

I then also did the decode test:

Binary = /tmp/flac-1.3.2-ubu
Performance counter stats for '/tmp/flac-1.3.2-ubu --totally-silent -d -f -o /tmp/test16.wav.flac-1.3.2-ubu /tmp/test16.flac' (10 runs):
    566,553464      task-clock (msec)         #    0,999 CPUs utilized            ( +-  0,24% )
4 context-switches # 0,007 K/sec ( +- 15,09% )
0 cpu-migrations # 0,000 K/sec ( +- 66,67% )
128 page-faults # 0,225 K/sec ( +- 0,50% )
1.511.998.785 cycles # 2,669 GHz ( +- 0,16% )
3.580.347.563 instructions # 2,37 insn per cycle ( +- 0,07% )
214.363.822 branches # 378,365 M/sec ( +- 0,20% )
5.272.298 branch-misses # 2,46% of all branches ( +- 0,05% )

0,566851320 seconds time elapsed ( +- 0,24% )

Binary = /tmp/flac-1.3.2-ubu-static
Performance counter stats for '/tmp/flac-1.3.2-ubu-static --totally-silent -d -f -o /tmp/test16.wav.flac-1.3.2-ubu-static /tmp/test16.flac' (10 runs):
    516,027060      task-clock (msec)         #    0,999 CPUs utilized            ( +-  0,97% )
3 context-switches # 0,006 K/sec ( +- 13,13% )
0 cpu-migrations # 0,000 K/sec ( +-100,00% )
119 page-faults # 0,231 K/sec ( +- 0,37% )
1.363.596.089 cycles # 2,642 GHz ( +- 0,15% )
3.378.787.107 instructions # 2,48 insn per cycle ( +- 0,08% )
213.400.313 branches # 413,545 M/sec ( +- 0,21% )
5.093.116 branch-misses # 2,39% of all branches ( +- 0,03% )

0,516293944 seconds time elapsed ( +- 0,97% )

Binary = /tmp/flac-git-static
Performance counter stats for '/tmp/flac-git-static --totally-silent -d -f -o /tmp/test16.wav.flac-git-static /tmp/test16.flac' (10 runs):
    488,574913      task-clock (msec)         #    0,999 CPUs utilized            ( +-  0,37% )
2 context-switches # 0,005 K/sec ( +- 20,10% )
0 cpu-migrations # 0,000 K/sec
118 page-faults # 0,241 K/sec ( +- 0,31% )
1.297.780.573 cycles # 2,656 GHz ( +- 0,16% )
3.044.344.214 instructions # 2,35 insn per cycle ( +- 0,09% )
180.420.141 branches # 369,278 M/sec ( +- 0,24% )
5.077.955 branch-misses # 2,81% of all branches ( +- 0,16% )

0,488829035 seconds time elapsed ( +- 0,37% )

Result:

On the decode task I found a 14% gain for the new CRC-optimized flac from git sources against the stock dynamically linked Ubuntu binary. A lot more than the folks over at flac promised.
There's "just" a "5%" increase against the Ubuntu sources compiled with "-O3 -march=broadwell". Decode and encode seem to be affected differently on the two Ubuntu based binaries. Honestly, I don't feel motivated to look deeper into it for now.
It wouldn't add much of relevance to the actual story.

Bottom line. Well done flac designers! You lived up to your promises. Your efforts are highly appreciated.

Enjoy.

PS: Above exercise and results were also discussed with the flac designers. 

********************************************************************************************************
Benchmarking test procedure:

IF="/tmp/test.wav"
OF="/tmp/test.flac"

DURATION="$(soxi -d $IF)"
BITRATE="$(soxi -b $IF)"
SAMPLERATE="$(soxi -r $IF)"
COMPRESSIONLEVEL="5"

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

echo "****************"
echo " DURATION:    $DURATION"
echo " SAMPLERATE:  $SAMPLERATE"
echo " BITRATE:     $BITRATE"
echo " COMPRESSION: $COMPRESSIONLEVEL"

rm $OF.* 2>/dev/null

for i in flac-1.3.2-ubu flac-1.3.2-ubu-static flac-git-static ; do
   BIN="/tmp/$i"
   echo "****************************"
   echo "Binary = $BIN"
   perf stat -r 10 -B $BIN --totally-silent --compression-level-$COMPRESSIONLEVEL -f -o $OF.$i $IF
   sleep 3
   sync
   echo
done
*************************************************************************
Compiling flac:

I'll show you now how to compile a static flac binary on Ubuntu or other Debian based systems. Open a terminal first.

I won't compile libogg support into the binary.

*************************************

sudo su

apt-get install build-essential libtool libtool-bin autoconf automake git nasm


BASE=/tmp

cd $BASE
git clone https://git.xiph.org/flac.git
cd $BASE/flac 
./autogen.sh

### gcc compiler settings:
### Find out the CPU specific parameter for your processor family and
### replace the "broadwell" entry below accordingly, e.g. with "haswell"

export CFLAGS='-O3 -march=broadwell'

./configure --prefix=/usr --enable-static --disable-shared --disable-ogg --disable-doxygen-docs --disable-xmms-plugin


### You should now see listed in the configuration summary:
###     SSE optimizations : ................... yes

###    Asm optimizations : ................... yes

make


ls -l ./src/flac/flac
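
### Optional sanity checks on the fresh build (just a suggestion):
### the git build still reports version 1.3.2 (see the note below),
### and ldd shows which system libraries, if any, are still linked dynamically

./src/flac/flac --version

ldd ./src/flac/flac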

******************************************

Here we go. It's that easy.

Now you'll have a bleeding edge high performance standalone (static) flac binary at hand. 

Note: It still says version 1.3.2 - just ignore it!

Copy it wherever you want it. 
E.g. To your LMS installation

cp ./src/flac/flac /usr/share/squeezeboxserver/Bin/x86_64-linux/















flac on steroids - Part 2

Basically as a fallout of my earlier flac benchmarking exercise, I looked into the performance of the flac binary in relation to the compression levels. It's known that the compression levels have an impact. Let's find out what we're actually talking about here.

If highest efficiency is the goal, for sure we need to look at that topic as well.



flac offers compression levels (CL) 0 to 8, and on top certain options can be applied to generate a 10th variant: a "Non-Compressed" flac audio file.

If you'd just look at the resulting file sizes - the obvious reason for having different CLs - I'd say leave this exercise alone, the difference between CL0 and CL8 is not worth it.
However. We want to look at efficiency. And that's making the case.


Aah. Non-Compressed (NC) flac!?!? What's that!?!?

OK. The original idea behind NC flacs is that people were looking for a .wav file wrapped
in a flac container with all its tagging features. An interesting idea.

dbPoweramp is the only (GUI based) tool I'm aware of that offers the Non-Compressed option.

You can achieve the same result by using the flac binary with the following options:

"--compression-level-0 --disable-constant-subframes --disable-fixed-subframes"


By default, the flac binary applies CL5. You'd have to intervene manually to get your CL of choice.

Keep in mind! Once the files are encoded, you can't tell anymore which compression level was used to encode them!
You'd need to re-encode a file (or collection) to a certain CL to be sure of what you've got in front of you.
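If you ever want to do that for a whole collection, a minimal sketch could look like the following. The /music path is a placeholder, and you should test this on a copy of your library first, since it overwrites files. flac keeps the tags when transcoding flac to flac:

DIR="/music"
find "$DIR" -name '*.flac' | while read -r f; do
    flac -s -f --compression-level-0 -o "$f.tmp" "$f" && mv "$f.tmp" "$f"
done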


Let's get the work done.


For the tests I've been using the earlier discussed CRC optimized flac made from git sources.


First I generated several flacs from my 44.1/16bit test16.wav:
flac -f --compression-level-0 -o test16-cl0.flac test16.wav
flac -f --compression-level-5 -o test16-cl5.flac test16.wav
flac -f --compression-level-0 --disable-constant-subframes --disable-fixed-subframes -o test16-nocomp-cl0.flac test16.wav
flac -f -l 0 --disable-constant-subframes --disable-fixed-subframes -o test16-nocomp-l0.flac test16.wav
flac -f -0 --disable-constant-subframes --disable-fixed-subframes -o test16-nocomp-0.flac test16.wav
(same filesize for all 3 above!)
Filesizes:
test16.wav = 103887884

test16-cl0.flac = 41017617
test16-cl5.flac = 39387134
test16-nocomp-l0.flac = 104165565
As expected the NC file slightly exceeds the original .wav size.
The file size difference between CL0 and CL5 can (IMO) almost be neglected.
I also generated three different NC files to prove that the different option sets generate the
same file. (A result of a discussion I had with a flac designer.)

I then executed the performance testing. I ran each of the tests several times.
And the tool itself ran 10 loops.

I used the new CRC optimized flac built from git sources with gcc opts set to "-O3 -march=broadwell", avx2 and nasm in place.
Here's the procedure:

########################################################################
BIN=/tmp/flac-git-opt
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
sleep 2
for i in test16-cl0.flac test16-cl5.flac test16-nocomp-cl0.flac ; do
echo "************************"
echo "
$i"
perf stat -r 10 -B $BIN --totally-silent -f -d /tmp/$i
sleep 3
sync
echo
done
########################################################################


And here are the results:

****test16-cl0.flac
Performance counter stats for '/tmp/flac-git-opt --totally-silent -f -d /tmp/test16-cl0.flac' (10 runs):
    620,669145      task-clock (msec)         #    0,999 CPUs utilized            ( +-  0,36% )
4 context-switches # 0,007 K/sec ( +- 12,77% )
0 cpu-migrations # 0,000 K/sec
102 page-faults # 0,164 K/sec ( +- 0,31% )
1.659.587.593 cycles # 2,674 GHz ( +- 0,21% )
3.544.392.275 instructions # 2,14 insn per cycle ( +- 0,00% )
265.420.089 branches # 427,635 M/sec ( +- 0,01% )
7.799.719 branch-misses # 2,94% of all branches ( +- 0,12% )

0,620986301 seconds time elapsed ( +- 0,37% )

****test16-cl5.flac
Performance counter stats for '/tmp/flac-git-opt --totally-silent -f -d /tmp/test16-cl5.flac' (10 runs):
    702,953210      task-clock (msec)         #    1,000 CPUs utilized            ( +-  0,28% )
5 context-switches # 0,007 K/sec ( +- 14,89% )
0 cpu-migrations # 0,000 K/sec ( +- 50,92% )
120 page-faults # 0,171 K/sec ( +- 0,33% )
1.879.136.299 cycles # 2,673 GHz ( +- 0,15% )
4.430.605.638 instructions # 2,36 insn per cycle ( +- 0,00% )
264.470.255 branches # 376,227 M/sec ( +- 0,00% )
7.447.174 branch-misses # 2,82% of all branches ( +- 0,07% )

0,703254931 seconds time elapsed ( +- 0,28% )

****test16-nocomp-cl0.flac
Performance counter stats for '/tmp/flac-git-opt --totally-silent -f -d /tmp/test16-nocomp-cl0.flac' (10 runs):
    993,153306      task-clock (msec)         #    1,000 CPUs utilized            ( +-  0,27% )
4 context-switches # 0,005 K/sec ( +- 12,06% )
0 cpu-migrations # 0,000 K/sec ( +- 55,28% )
102 page-faults # 0,103 K/sec ( +- 0,32% )
2.658.086.321 cycles # 2,676 GHz ( +- 0,25% )
7.457.070.868 instructions # 2,81 insn per cycle ( +- 0,00% )
920.078.916 branches # 926,422 M/sec ( +- 0,00% )
1.298.655 branch-misses # 0,14% of all branches ( +- 0,87% )

0,993540048 seconds time elapsed ( +- 0,27% )

Result summary:
CL0 = 0.620986301 s
CL5 = 0.703254931 s  (+13.2%)
CLN = 0.993540048 s  (+60%)
Wow. That's a surprise.
+60% on the Non-Compressed flac. I'd have expected it to be faster than CL0!?!?
I then learned from the flac designer that flac still runs several tasks of the "decode" process...

...now on a much larger Non-Compressed file.

That for sure can make the difference. 

And that also means:
A Non-Compressed flac doesn't equal a .wav file in terms of its data structure!
An NC flac still needs to get processed!
And IMO that pretty much kills the "Non-Compression" case...

Another conclusion: 

Using CL5 is 13.2% slower than CL0 on decode.
If many of us appreciate a >5% performance increase from a new CRC algorithm,
choosing the right compression level will have a more than relevant impact on the overall decoding performance on top of that.



Wrap-Up

What are the learnings of Part 1 and Part 2 of "flac on steroids"?

If you look for best performance and highest efficiency, you need to look at the binary AND the data. In the real world scenarios shown here, the performance gain adds up to more than 25%. Not too bad.

You can't rely on your distribution or SW package to provide you with high performance
SW. You'd need to compile it yourself for your own platform. 

This exercise also confirms to me that I'm on the right track with my own stuff by using CL0 flacs.
And decoding flacs on my Intel NUC server instead of on the RPI doesn't seem to be the worst idea either.

And the exercise also shows that .wav, from an efficiency perspective, is still the preferable format. However. If you add the "streaming-load" effect to the equation,
wav - at more than double the size - does add some extra load on that account. As usual, you need to look for the better compromise.

Since I decode my flacs prior to playback and bulk-store them in a local RAM buffer, I don't need wav, don't face the continuous streaming load issue and can still enjoy the advantages brought to us by flac...

...and here it comes...  ...all that while not experiencing any impact on perceived sound quality! ;)


Enjoy.


flac vs. sox - the showdown

Today I had the idea to benchmark my steroid-pumped flac and sox binaries for decoding a flac.

Why is that?

Both apps offer the very same functionality. And are widely used for that job.




On e.g. a Logitechmediaserver (LMS), sox could also be used instead of flac to decode flacs.

That's actually what I do, because I currently resample the data to 384k at the same time.
Basically, instead of having flac feed sox (the default setting), I let sox do the whole job.
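For illustration, such a one-pass decode + resample + dither job could look roughly like this. It's a sketch: the file names are placeholders and the rate/dither options simply mirror the ones I use elsewhere on this blog - adapt the target rate to your setup:

sox -qq -t flac ./test16-C0.flac -t flac -C 0 -b 24 ./test24-384k.flac rate -v -b 95.0 -p 50 -a 384000 dither -S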

How did I run the test?
After finishing my earlier steroid benchmarks I installed the self-compiled high performance versions of flac and sox on my Ubuntu system.

I use these for the test now. Both are dynamically linked.

For the test I've taken a 16bit C0 test flac.

And that's what I've done:

*********************************************************************************************

echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Case1:


perf stat -r 10 -B sox -qq --no-dither ./test16-C0.flac -t wavpcm ./test16.wav

Case2:


perf stat -r 10 -B flac --totally-silent -d -f -o ./test16.wav ./test16-C0.flac  

*********************************************************************************************

And here comes the result:

Case1:

Performance counter stats for 'sox -qq --no-dither ./test16-C0.flac -t wavpcm ./test16.wav' (10 runs):

609,739540 task-clock (msec) # 0,999 CPUs utilized ( +- 0,68% )
6 context-switches # 0,010 K/sec ( +- 23,25% )
0 cpu-migrations # 0,000 K/sec ( +-100,00% )
352 page-faults # 0,578 K/sec ( +- 0,33% )
1.633.659.459 cycles # 2,679 GHz ( +- 0,69% )
3.966.964.408 instructions # 2,43 insn per cycle ( +- 0,00% )
507.480.648 branches # 832,291 M/sec ( +- 0,00% )
5.598.974 branch-misses # 1,10% of all branches ( +- 0,08% )

0,610219378 seconds time elapsed ( +- 0,68% )

Case2:

Performance counter stats for 'flac --totally-silent -d -f -o ./test16.wav ./test16-C0.flac' (10 runs):

507,810385 task-clock (msec) # 0,999 CPUs utilized ( +- 0,23% )
5 context-switches # 0,009 K/sec ( +- 24,98% )
0 cpu-migrations # 0,000 K/sec
109 page-faults # 0,215 K/sec ( +- 0,81% )
1.358.493.244 cycles # 2,675 GHz ( +- 0,19% )
2.917.924.459 instructions # 2,15 insn per cycle ( +- 0,00% )
220.421.986 branches # 434,064 M/sec ( +- 0,01% )
5.643.611 branch-misses # 2,56% of all branches ( +- 0,07% )

0,508171327 seconds time elapsed ( +- 0,23% )


********************************************************************************

Interesting result again. flac is around 17% faster than sox for the very same job.

What's even more interesting: if you look at the number of executed instructions, you'll notice that sox runs about a billion (roughly 25%) more instructions for the job.
Hmmh. That would somehow explain the difference. But what the heck is sox doing there!?!?


Bottom line

Looking at the above I'd say: if you just need the decoding part for a flac and no DSP (resampling/dithering/format conversion etc.), you'd better use flac only.
If you need flac decoding AND DSP jobs to be done, better use sox only.

Perhaps I try to get in touch with the sox designer to see what's going on with sox.

Enjoy.






piCorePlayer 4.0 - released

The piCorePlayer team released version 4.0 around the 1st of September. Yep. Already 6 weeks ago.
What you'll find first of all are numerous updates under the hood.
Great to see that the pCP team still puts many hours into the project to supply a state-of-the-art RPI audio distribution to the community. Thx a lot to the entire team.



Let's have a little closer look at it (that includes a complete pCP 4.0 settings guide targeting  best-in-class performance for a PI based streaming device):




As usual, what really matters to me is the optimum setup and/or "optimization" perspective.

The pCP team supports that idea quite well by providing a slim, highly efficient and customizable system and, not to forget, a well chosen set of up2date software
packages as a base.

From that angle, one of the key subjects to look at is this:

The kernel. Basically only the kernel and squeezelite are making music.
pCP 4.0 (still) comes with a customized kernel version, comprising several patches,
configuration changes AND the realtime (rt) patch.
There's afaik no other kernel (or RPI OS) out there offering such a high performance basis. Great!

However. I mentioned it in earlier blog posts. The rt-kernel is a race horse!
Taming it needs a certain sensitivity and experience. 


I strongly recommend using the rt-kernel only if you run the RPI as a single-task
streamer - without LMS - and also without any attached USB devices,
such as a USB DAC or HDD. The PI's Achilles' heel is its very limited USB/ethernet
implementation. You don't want to push it!

For "multipurpose" environments use the normal-kernel pCP version!

If you don't follow this advice, there's a high risk that things get worse or really
bad (e.g. hard hangs/lock-ups) when running rt in a multipurpose environment.

******

The pCP team also introduced a CPU isolation setting and a CPU affinity
feature. (Thx for listening, Paul ;) )


You'll be able to reserve CPUs exclusively for this or that task, process or interrupt.

Basically what I've been suggesting and explaining to do via the tune script in earlier blog posts can now be done via GUI.

My recommendation:

Inside the "Tweaks" menu you add: 

1.  "0,3" to the "CPU Isolation" field 

that leaves CPU 1 and 2 for all other tasks - which is more than sufficient. 

As soon as you've saved and enabled this setting the affinity fields for squeezelite will appear!

2. Leave the "Squeezelite CPU" affinity field for the main process empty.
3. Add "3" to the "Squeezelite Output CPU" field and save it.






That's it.

4.  Furthermore - do not forget to set squeezelite to priority "45" in the squeezelite settings
     dialog to actually make use of the elevated rights supplied by the rt-kernel basis.
     Do NOT go higher than 45 (99 would be max) though!!!!
     There are several system processes running at around prio 50.
     You don't want to get the system out of balance!

What are we actually doing here?

The RPI has the unique (not very nice) feature that all IRQs are nailed to CPU0. You can't change this! By isolating CPU0, we isolate all IRQs on CPU0. There'll be no interference from other tasks or processes that would otherwise be running on that CPU. With CPU3 also isolated, squeezelite - the output thread of our streaming engine - can run exclusively on that single CPU, without distractions and thus very smoothly.
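For those who want to understand (or reproduce) what the GUI does under the hood, the mechanics are roughly these - a sketch of the underlying Linux mechanisms, not necessarily the exact pCP implementation:

### cmdline.txt: keep the scheduler away from CPU0 and CPU3
isolcpus=0,3

### coarse manual equivalent of the affinity setting: pin the running squeezelite process to CPU3
### (the pCP GUI is finer grained - it targets the output thread specifically)
taskset -cp 3 "$(pidof squeezelite)"

### check which CPU the interrupts actually land on
cat /proc/interrupts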


*******

What's still missing in pCP 4.0 though is the LED-off feature. I brought it up with the team.
And I know Paul was looking into it. We've been in touch about it.
Somehow he didn't manage to get it in.
That's a pity.

If you are interested in this, there's good news: I wrote up a little HowTo for the 3B+.


*******

Furthermore, for those running LMS there's good news: flac and sox - the most important streaming engines (apps) - have been updated.
These should now show state-of-the-art performance! (I haven't verified and updated my flac/sox benchmark
blog posts in line with the new developments yet.)

What IMO remains valid advice: I think it's a better idea to run heavy DSP tasks on a separate powerful server - the LMS versions for Intel machines have btw also recently been updated on flac and sox!

******

And... 

...don't get frustrated with the numerous "save" actions and "reboots" while configuring pCP.

The pCP configuration can get a bit annoying from time to time. Unfortunately nothing has changed in pCP 4.0 on that account.

However.

What matters in the end: it works, and you just have to do it once in a while!
Besides that, no other distro offers this many system configuration options.

Let's keep the ball low and not complain too much. ;)

******

Bottom line. With pCP 4.0 you'll get the IMO hottest and freshly painted RPI audio streaming race horse that's being offered in the market. 


You just need to know how to ride it!  I hope I can contribute a little on that account.


Enjoy.

*******


Annex 1:

Don't forget to add the settings below to the above setup recommendations to end up with an IMO outstandingly performing and sounding RPI based streamer setup:

1. HDMI off




2. Internal audio off



3. Wifi/BT off




4. Fixed clocking scheme - PI2 and 3(+) :





OPTIONAL (I do recommend it!):

5. Under "Tweaks": we stop the pCP webserver (and ssh daemon) after a certain time.
    The example below will grant you 120 seconds on the webserver to apply changes, such as
    disabling this tweak again by removing the entry and saving it. It'll also let you log in
    remotely via ssh for exactly 125s. You can change the numbers to your liking. But don't go
    below 60s on the first sleep. Doesn't make sense.
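
    For orientation, the entry typically goes into one of the user command fields on that Tweaks page
    (if your pCP version offers them), and could look roughly like this. Treat the daemon names httpd
    and sshd as assumptions - verify them with ps on your box before relying on it:

    (sleep 120; pkill httpd) & (sleep 125; pkill sshd) &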







     Both killed services are related to networking. With this config in place we remove potential
     further distraction from the streaming task.



6. To finish up a complete setup, I'd like to show you below a way to set up
     e.g. the Allo Boss DAC mixer and overlay...





  
    These tiny and rather hidden settings can have quite an impact!!!

    Note: 
    Toggling the analog output level by 6dB means you can switch between 2V and 1V max.
    Usually you want to go with this option turned on.
    Most other DACs run 2V on the output at 0dB or 100%.



7. ... and how to set up squeezelite for e.g. the Allo Boss

    Note:
    Most settings below apply to pretty much all HAT DACs out there.
    The only thing that differs is the "Alsa volume control" field.
    This field refers to your DAC's internal volume control as offered by
    the DAC driver. The squeezelite software volume control gets bypassed if this
    field is used.
    You can look up the available values by clicking the "more" function right beside that field.










*******************************************************

PS1: 
I'll update all old HowTos according to pCP 4.0 on this blog as soon as I have the Allo Katana DAC at hand.


PS2:
My testsystem:
RPI 3B+ - Allo Boss - Adam A5X speaker - ethernet cabling / NUC server + latest LMS 


PS4:
Here you'll find the main pCP4.0 support thread.



********************************************************************************************************






All lights off - RPI3B+

I just thought it would be nice to share how to get your flashy 3B+ to pretend to be
dead - at least on the surface.

Unfortunately even the recently released piCorePlayer 4.0 doesn't offer such a trivial and nice tweak. (It's not that I haven't been talking to the pCP folks about it. ;) )
That leaves us with one option: manual commandline intervention!








Background:
I don't like flickering LEDs annoying me in a dark listening room.

And anything that doesn't contribute to the way I'm operating the PI I try to turn off as well.

Bottom line: There's no way around this tweak.

Result:
As you can see above, the PI LEDs are off...

(...however, the DAC HAT "obviously" does not intend to hide its operational status. ;) )

At least the tower doesn't look like a flashy Christmas tree anymore. That's nice.
Another side effect of this exercise: it'll save some mA (~2mA).

And what's really new and exclusive to the 3B+ is that the ethernet port 
LEDs also get turned off!



With a 3B+, the way to turn off the LEDs (as described in earlier posts for other models) has changed slightly.

The tweak will be permanent! Unless you refresh/restore your default "config.txt" file.


I'll show you how to accomplish the tweak. It's actually quite simple. I assume you know how to log in via ssh.


Here's what you have to do on e.g. a piCorePlayer installation:

##########################################################

ssh tc@xxx.xxx.xxx.x

PW: piCore

sudo su

mount /dev/mmcblk0p1 /mnt/mmcblk0p1/
cp /mnt/mmcblk0p1/config.txt  /mnt/mmcblk0p1/config.txt.orig

###The next block - down to next hashed line - copy/paste all at once!#####

cat <<'EOF'>>/mnt/mmcblk0p1/config.txt

## Turn off onboard and ethernet LEDs of RPI3B+
# GPIO expander activation 
dtoverlay=pi3-act-led
# Disable the ACT LED
dtparam=act_led_trigger=none
# Disable the PWR LED
dtparam=pwr_led_trigger=none
dtparam=pwr_led_activelow=off
# Disable ethernet port LEDs 
dtparam=eth_led0=14
dtparam=eth_led1=14

EOF


###########################################################


sync
reboot


#######################################################

Note: You can apply the same config.txt changes to Arch Linux or Raspbian based installations with their /boot/config.txt files in quite a similar way!


And that's basically it.
For restoring the original settings just copy the config.txt.orig file back to config.txt.
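
If you just want to test the effect before touching config.txt, the onboard LEDs can usually also be switched off at runtime via sysfs. This is non-permanent (gone after a reboot), the led0/led1 naming follows the usual RPI kernel convention and may differ, and the ethernet port LEDs are not covered this way - those need the dtparams above:

echo none > /sys/class/leds/led0/trigger
echo 0 > /sys/class/leds/led0/brightness
echo none > /sys/class/leds/led1/trigger
echo 0 > /sys/class/leds/led1/brightness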

Good luck with this exercise.

Enjoy.




RPI 3B+ - On Air

With my head still buried in the recent pCP 4.0 setup and related potential tuning options, another idea popped up.

It actually happened while getting annoyed about my rather short 20-inch ethernet test cable, which - when I accidentally pulled at it - caused my router to take a hefty drop, followed by a network outage in the house.

The actual idea was:

What if the onboard Wifi of my shiny brand-new RPi 3B+ were actually worth a closer look!?!?







For several years now, I vehemently ruled out onboard Wifi for IMO several good reasons:

* lower bandwidth
* EMI/RFI - the antenna sits right below the audio HAT
* pretty basic implementation
* serious extra load on the CPU
* higher OS jitter
* single Wifi band

And then there were/are the usual environmental downsides of Wifi

* time of the day variances 
* router location 
* Wifi band occupation


Times are changing while technology is evolving. Decisions made earlier should be revisited once in a while.

I've seen more than once that you have to challenge earlier decisions and experiences.
No challenge - no progress!

What do we actually have on the table now?

First of all: many of the above Wifi related issues still apply. Hmmh. No good.

OK - Let's call it off! 

...just kidding. There's obviously some progress to be recognized (on paper).



* The new 3B+ is much better engineered  (layout/heat/power/radiation)
* The RPI 3B+ has a much better Wifi chip (Cypress CYW43455) onboard 
* It supports 802.11n/ac - which offers access to the less crowded 5GHz band
* With Wifi we bypass the entire (still rather poor) USB stack incl. ethernet 
* Turning USB/ethernet off might lower power consumption seriously
* Cabling, routers, NW bridges etc. can be removed to a certain extent 
* The galvanic isolation starts onboard
* Several recent generation audio HATs are better protected against EMI/RFI/noise,
   and on top of that there are even isolator HATs available
* The new Cypress chip seems to offload en-/decryption tasks from the RPI CPU!
* The new embedded antenna from Proant - which basically is etched into the board layers -
   must be considered a really smart high performance design choice (you might want to
   read this review.)





The trapezoidal shape in the image above is the actual antenna. The shielded box below it hosts
the Cypress Wifi chip.




Bottom line. All this looks like a pretty promising package to me. 
It's gotta be worth a try. 

************************************************************

Let's do it...


Setup


Within 5 minutes I had it all up'n running on pCP4.0  - with audio rt-kernel and the Boss!
It's a no-brainer. Seriously!







I did figure out 3 issues on pCP 4.0 though - which caused me some extra effort:

1. The country specific setup wouldn't work (Paul from pCP has probably fixed it by now)
2. Wifi power management can't be turned off in pCP (usually that makes Wifi
    operation more stable)
3. The ethernet can't be turned off via pCP - WLAN and ethernet run in parallel

OK. Nothing we couldn't resolve with another tweak. I communicated to Paul that 2. and 3. would be very useful and very simple features for the next release (at the latest! ;) ).
For now he promised to fix 1. Let's see what happens.


Operation

Let's have a look at the operation. At first glance everything looks normal! Pretty boring.

Most important for now is the network performance - obviously. 
I ran several iperf3 performance tests. 

It's sufficient to focus on UDP performance in the context of this blog article.
(If UDP works properly,  TCP works for sure)

UDP is the protocol used by the LMS/squeeze streaming environment on the ethernet.

Here's how to check it out:

On my NUC server (wired Gbit LAN)  I started:
  iperf3 -s -V -A 2
On the RPI:
  iperf3 -c 192.168.1.xx  -V -A 2 -u -b 300m -t 20  
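
If you also want a quick TCP cross-check with the same CPU pinning, something along these lines should do (same placeholder IP as above):
  iperf3 -c 192.168.1.xx  -V -A 2 -t 20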


As a 2nd field of potential improvements I looked at shutting down this or that part of the USB stack. The Wifi chip doesn't use the USB bus at all, and nothing else is using it anymore either. It's sitting idle.



Results:  

Test1: 

* 180 degree - "Line of sight",  
   basically the antenna is pointing away from the router (3ft distance)




  137MBit/s, quite low jitter, and no packet losses. That looks good.


Test2: 

* 0 degree - "Line of sight",  

   basically the antenna is pointing directly to the router (3ft distance)


Similar to Test1, performing a bit better on the packet jitter 


I did some more testing. All with similar results. What more do I want!?!? Even much better throughput than all earlier generation RPIs on wired ethernet!


Test3 (see Annex1):

Using hub_ctrl to shutdown USB and ethernet:

Savings: about 60mA

Not that bad!


Test4 (see Annex1):

Shutting down the entire USB stack:

Savings: about 200mA

Wow. That's a lot. Almost 50% off the entire 3B+ power consumption @800MHz.

This is a new dimension!


Real world setup


After the lab-testing - I introduced the new setup to my real world audio system. 
I'm still using Allo Kali/Piano2.1/Anaview AMS0100 amp and the iPowers + numerous tweaks. (still waiting for the Katana)

First I was impressed by how much stuff - cable, filters, AP, PS - could be removed.

In the living room I use a FRITZ!WLAN 1750E repeater from AVM.
It's configured as WLAN bridge (hooked up to wired GBit-ethernet).
Distance from RPi3B+  to repeater is about 6ft. Not too bad.


My network tests ran in a similar fashion as above...  ....with similar results. Great.

Just to mention it once more: I stream (bulk-loaded into a RAM buffer) 352k8/384 upsampled material from the server. That gives me quite a network load at the beginning of a track.

So far I couldn't make out any issues on that one. 


The grande finale!!!!!!!!!!!!

Sunday night listening tests!! 

To make a long story short. I really liked what I heard. 

As of now I don't see a single reason why I shouldn't use the OnBoard Wifi and the RPi 3B+. 

Not only did I get rid of a lot of stuff and gadgets, nope, I also experienced slight improvements on the sound side. That wasn't expected.

It still requires a bit of tinkering though to achieve best performance.

Bottom line. The whole exercise was well worth the effort.

One of the key factors to success is obviously the quality of the WLAN connection.
You should make sure to use a state of the art router or repeater in rather
close distance to the RPI to end up with a very high quality (5GHz) connection. 


Enjoy. I really hope you do.


#######################################################################

Annex 1:

Below I'm listing the tweaks supporting and enhancing  above setup:

1. Killing the ethernet processes and bringing the ethernet down:


 pkill -f eth0  && ifconfig eth0 down


2. Turn off WLAN power management

 iwconfig wlan0 power off
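
 A quick way to verify it took effect:

 iwconfig wlan0 | grep -i "Power Management"

 It should now report "Power Management:off".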


3. Turn USB ports and ethernet off with hub-ctrl (saves about 60mA)

 ### Only valid for 3B+ !!!!

 #ETH0 = off
 hub-ctrl -h 0 -P 1 -p 0

## I realized today (Oct 30th) that the mod below causes a reset of the hub chip,
## which after that goes back to normal.
## For the time being this tweak is no longer recommended!

#USB = off
 hub-ctrl -h 1 -P 2 -p 0


The Killer Tweak

4. Turn off the entire USB stack on 3B+ (saves about 200mA!) - you can skip 3.
  
   echo 0 | tee /sys/devices/platform/soc/3f980000.usb/buspower >/dev/null











SoX on steroids - Part 3

Recently (around mid-October 2018) the Logitechmediaserver (LMS) binaries for sox and flac were upgraded.

Nice. People are listening. (I know they are. ;) )

Remember: these binaries are basically the LMS engines when it comes to audio streaming. As soon as the audio stream leaves these programs it hits the network.
That's why they are anything but irrelevant for a high performance system!
It is time now to see what the LMS folks have done. I mean, the situation around the old LMS binaries was absolutely unacceptable.

There was a factor of 8 between my own compiled binary and the one supplied with LMS
on my NUC platform. On the RPI 3B the exercise took that LMS sox binary almost 27 minutes! And not to forget, the test file is only around 388s long! That simply hasn't been working.



Let's have a look at the current situation.

Luckily the new LMS binaries can be found on github. There's no need to upgrade the whole LMS installation. Just extracting the binaries will work.

Just to recap.

During the benchmarking exercise in May I was offline-upsampling a flac test file to 352k8Hz. I ran the exercise on my i5 Broadwell NUC (and later on the RPI).

Let's have a look at the benchmarking on the NUC today.

In May it took around:

the LMS binary          112s  
the Ubuntu binary      38s 
and my own binary    15s

No comment.

FastForward to October:

I'm running the same tests as in May using the same test file. However,
I have meanwhile upgraded Ubuntu to 18.10, the kernel is now Ubuntu mainline 4.19 low-latency and the gcc compiler has arrived at version 8.2.

Now. That's the job:

***************************************************************************
SRC="rate -v -b 95.0 -p 50 -a 352800 "
DITHERMODE="dither -S"
IF="/tmp/test.flac"

for i in sox-opt sox-lms sox-ubu; do
   BIN="/tmp/$i"
   OF=/tmp/$i.flac
   echo "****************************"
   echo "Binary = $BIN"
   time $BIN -t flac $IF -t flac -C 0 -b 24 $OF $SRC $DITHERMODE
   sleep 2
   rm $OF
   echo 

done
************************************************************

Ah. I'd quickly like to show you that the binaries are all different:

******************************************************************************
-rwxr-x--- 1 root root    239104 Okt 31 13:58 sox-opt
-rwxr-xr-x 1 root root  2209040 Okt 31 16:25 sox-lms
-rwxr-xr-x 1 root root    71976 Okt 31 16:55 sox-ubu
*************************************************************

The LMS binary is compiled statically. Everything it needs is inside
its trunk. That makes it quite heavy. My own binary is bigger than
the Ubuntu binary because it's compiled processor-specifically. That supposedly
can make files bigger. At this point though I'm not 100% sure what's causing
the difference in file size.
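
If you want to check that yourself, two quick commands tell static and dynamic apart (sizes and library lists will obviously differ per system):

file /tmp/sox-lms      (reports "statically linked" vs. "dynamically linked")
ldd /tmp/sox-ubu       (lists the shared libraries a dynamic binary pulls in)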

Anyhow.

Here comes the result:


****************************
Binary = /tmp/sox-opt

real    0m14,697s
user    0m14,447s
sys     0m0,249s

****************************
Binary = /tmp/sox-lms

real    0m30,177s
user    0m29,949s
sys     0m0,227s

****************************
Binary = /tmp/sox-ubu

real    0m14,881s
user    0m14,580s
sys     0m0,301s

****************************

There've obviously been changes. And not just on the LMS side!
Now you might understand why I showed you the file sizes of the
binaries earlier. The Ubuntu binary's performance was kind of a surprise to me.
I simply had to double-check that I got the binaries right when looking at the result.


Bottom Line:

Great. The LMS team managed to get a big step forward. About factor 3 better.
They basically achieve the level that was achieved by the Ubuntu binary in May.

On the other hand it's interesting to see that the stock Ubuntu binary also took a big step forward. It's now as fast as my own binary. Great!

@Paul and Ralphy from pCP.
Great progress. However. It seems, though, that you're still not there yet.
Even the Ubuntu binary is 100% faster than the LMS binary. Let's see if you manage to squeeze a little more out of it.
I'll be watching.

Enjoy.





piCorePlayer 4.1 released

Yesterday pCP was stepped up to release 4.1.

It's a basic maintenance release.

I'd strongly recommend running the update asap.

Find more info and a little update HowTo below.



Here's the 4.1 changelog as issued by pCP :


  • Kernel 4.14.81
  • AudioCore kernel 4.14.81-rt47. All RPi processors supporting Realtime Kernels.
  • RPi Firmware 2018/11/12
  • Support for RPi3A+ Board
  • Upgrade to Busybox 1.29.3
  • Fix for partition sizing, should resolve >32GB sdcard issues.
  • Wifi: Added driver for rtl8822bu chipset
  • Wifi: Correction for "=" in passwords.
  • Wifi: Add CRDA for proper setting of Country code.
  • Update to Squeezelite v1.9.0-1121-pCP
  • Integrate gpio-poweroff and gpio-shutdown overlays. (Easier support for power on/off boards)
They also introduced minor changes under the hood.

Nothing that heavily affects anything related to the setting or tweaks I'm suggesting on this blog.

For those who run Wifi, the fix to get the proper country code assigned can have a slight 
impact on the WLAN performance. But that fix is a must!

Support for the RPI 3A+ is also new.

However. I'm not going for a 3A+, for several reasons:

1. CPU and WLAN are the same technology
2. 512MB RAM only
3. We can turn off USB and ethernet on the 3B+ anyhow.
4. If we'd need USB/ethernet, we can use it on a 3B+!
5. And I'm also wondering how you'd configure your WLAN access without
    monitor or ethernet access. You'd need at least a monitor/keyboard or an
    ethernet USB dongle to be able to configure the WLAN.

    The pCP team, being aware of this issue, has written up a little workaround.
    You'd basically mount the pCP image on another computer and edit
    the WLAN related stuff manually.
    That's not really a killer issue.




Now, let me show you how to run the update.

UPDATE (insitu)

There's no need to reinstall pCP for 4.1 from scratch. You can just run an update.

1. Change to "Beta" mode
2. Push "Update pCP" and follow the process





There are some little things to look at after the upgrade though:

Not all settings were kept! 

1. You need to redo the underclocking settings
2. You need to redo the manual LED-off tweak
3. You need to disable internal audio
4. Disable HDMI was deactivated
5. You need to push save on the "CPU Isolation" field to see the squeezelite CPU affinity
    settings. Luckily the earlier settings are kept.

Some settings of the config.txt/cmdline.txt  (BIOS) get overwritten.


The custom-squeezelite binary, if you built one earlier, is kept.



That'd be it so far.


Enjoy.



Blog - Under Construction !!!

I'm currently rewriting, restructuring and updating my blog and articles.  

  
There'll be a little chaos here and there.
I didn't want to take everything down while I'm working on it.

I think I'll be done with most changes in about 2 weeks from now, around Apr-28th.

I appreciate your patience.


Shoot the Trouble -- USB Audio Interfaces


With all the very interesting Raspberry Pis and other ARM devices around, Linux is becoming more and more interesting for many people. Great audio transports can be built for around 100$.
Not to forget: tablets and phones are mainly Android devices, and Android is just another Linux, using the same sound layer (Alsa) as all other Linuxes.

Manufacturers usually still do not commit to supporting Linux or Android properly.
Which is insane. The vast majority of mobile devices out there are Androids.

However. Many devices work or partially work under Linux, because manufacturers comply with the general USB Audio standards (UAC1/UAC2). Meanwhile even pro audio companies like RME offer a "Class Compliant" mode for their newest generation of USB devices. (In the RME case they officially focus on OSX though.)

Anyhow. Even though things are getting better, there are still plenty of  cases where you'll experience NO SOUND.

That doesn't necessarily mean that your device won't work under Linux.

This article outlines a little guideline for troubleshooting your USB audio interface under Linux.
It should give you certain hints what to look for.

However. Just to make it clear from the very beginning.

I won't support anybody who has issues with their interface!!! Check out Google or the community.

If you have comments for improving the article please let me know.





Many people have very limited Linux knowledge and give up very quickly if their interface doesn't
work right after turning on the Linux machine.

Let's start.

Checkout following points.



1. Clarify whether your device is UAC2 or UAC1 compatible.
    Ask your dealer, the manufacturer, the community.
    If it is not UAC compatible, you'll need a dedicated Alsa driver. Alsa is the Linux sound layer.
    Check if there is a dedicated driver for Linux or Android available.
    If that's not the case you won't get the interface up'n running.

    Beware:
    E.g. the Audioquest Dragonfly pages do not talk about Linux support. They wouldn't ever do so.
    However. The device works under Linux quite well.

    Advice: If you intend to purchase a device, make sure it is at least UAC2 compatible.
    Don't let yourself be fooled by the feature list or forum feedback about great sound or similar.
    Get an interface that works on all platforms.


2. Use the latest Linux distribution you can find. Ubuntu is usually your best bet.
    The community and community support are huge.
    Key is that you've got a pretty up2date kernel installed, which usually delivers the newest audio drivers.
    Ubuntu is usually half a year behind the generic kernel development. That's not too bad.
    ArchLinux, another distribution, usually comes with the most up2date SW.
    But ArchLinux is not recommended for users without in-depth Linux knowledge.

    You might install Ubuntu on a stick, just to test whether your device gets recognized.




3. Now I'll post some commands that you issue in a Linux terminal. Please open a terminal (usually
  the shortcut "CTRL-ALT-t" should do).


3.1 List all recognized audio devices:

(type commands without #)

# aplay -l


or


# cat /proc/asound/cards


3.2 Check all recognized USB devices

# lsusb

or more in-depth

# lsusb -vv
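
To narrow the verbose output down to the audio class descriptors, a filter like this helps (a rough sketch; USB audio class devices show up with bInterfaceClass 1 "Audio"):

# lsusb -v 2>/dev/null | grep -E "bInterfaceClass|bInterfaceSubClass"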


3.3 Check if the kernel module (which is the driver) is loaded

# lsmod | grep snd_usb

You should see "snd_usb_audio" listed 


You might get a long list. Scroll back. You might find your device listed.


If you can't find your interface at this point, it is most probably not supported.

If you can see it, you might just face a setup issue. Continue.


3.4 Check alsamixer settings

Some soundcard controls are set to 0 by default (e.g. the Dragonfly onboard volume control).


First find out your soundcard number:

# cat /proc/asound/cards

All listed audio interfaces are indexed, starting with 0 and continuing with 1, 2, 3, 4.
Memorize the index that belongs to your interface.

Now we open alsamixer. Replace "INDEX" with your index  number.

# alsamixer -c INDEX


Use the arrow keys to navigate. ESC to exit alsamixer.
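
If you prefer to do the same from the command line instead of the alsamixer UI, amixer can do it as well. The control name 'PCM' below is just an example - pick one from the scontrols list of your card:

# amixer -c INDEX scontrols

# amixer -c INDEX sset 'PCM' 100% unmute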




4. Soundcheck


There is a nice little Alsa utility called speaker-test. It just generates noise.
That's sufficient for testing purposes. Just run:

speaker-test -D plughw:1,0 -c 2

Replace "1" with your card index.  -c 2 stands for two channel test. You'll experience a continuous switching left/right channel noise.

Note: Turn your volume control down before you start this test!!






Still no sound...

....let me know.


5. Dropouts, Pops, Clicks, Xruns,.....

There are many setups where users experience nasty XRUNS.
Usually these are caused by buffer underruns.
That means your DAC requires more data than your PC is willing or able to deliver. That's usually caused by competing processes on the same machine.

The first measure is to increase your Alsa output buffer sizes. Usually you'll find
a parameter in the settings menu of your preferred app.

If we take squeezelite as an example, you'll have a 20ms buffer by default.

You might try to increase this setting by adding

-a 40:4::1

or larger

-a 80:4::1

or larger....

As a second measure you can try to increase the task priority.

With squeezelite that would be done using option -p, e.g.

-p 85

Note: 99 is max. The higher you go, the higher the chance that you lock up the system.
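
Put together, a hedged example of a squeezelite call combining a larger buffer and an elevated priority could look like this - the output device hw:1,0 is an assumption, use your own card index:

squeezelite -o hw:1,0 -a 80:4::1 -p 85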


Still XRUNS??? ..

....Let me know

To be continued...















Audio meets IoT (DIY)

I've been running RPIs and (HAT) DACs for years now. One thing's been bothering me all the time.

The powering scheme. The handling of it, to be exact. I've still been pulling plugs all the time for that very purpose. Pulling plugs in a lab is one thing; doing that in a living room is simply not acceptable in the long run. How do you tell your wife or kids how to turn on that DIY monster stereo!?!? Push that plug, then this one and finally that one. Simply not possible.

I finally decided to get some Wifi controlled mains plugs in place. Neither infrared nor bluetooth controlled devices made it onto my wishlist.

Let's see what IoT (internet of things) projects are offering...




OK. You'll find all kinds of Wifi controlled IoT plugs over at Amazon for around 10$/€ each nowadays.

E.g. I own and use some of these:





The main issue I have with these kinds of devices - and what becomes an absolute NoGo for me - is:

The stock software (or better, firmware).

First, many of these devices force you to open up some account in the cloud,
connect to some cloud site, and transfer who knows what data.
Then, what about security!?!? Hmmh. No idea. Grey zone.
Finally, maintenance!?!? A ten bucks device, maintenance!?!? You gotta be kidding.

From my perspective: IoT - done that way - without me. I'm out.


OK. OK. Calm down. There are solutions.
Luckily I stumbled over a nice project. It's called Tasmota. This open source firmware project
is based on the fact that many of these cheap IoT devices have the same MCU chip family inside:

The ESP8266 MCU from Chinese manufacturer Espressif.

The model ESP-12E is the one we're looking at. It comes with 4 MiB of flash memory. Yep. I know. There's a newer ESP32 MCU out there. IMO that's overkill. It's not needed for controlling a couple of relays. One very important point: the ESP8266 is very well documented and supported.

You can easily flash (program) these ESP8266s with the Tasmota firmware via a serial interface.
With Tasmota on the device you'll get full control over your device.
No need for online accounts, there's (potentially) no sniffing, and it's well maintained.

You can usually flash these 10$ mains plugs if there's an ESP8266 inside. Many of them come with an ESP8266 inside, and the one shown above also has that chip. On the Tasmota pages you'll find reference lists.
Flashing these mains plugs requires some effort. You need to open the case. Then you have to solder 5 wires (VCC/GND/RX/TX/IO-0) to the board to be able to set up a serial link to your PC. That serial link you connect to the PC through a USB-to-serial adapter...

OK. That's NOT what I'm gonna describe in this article. (There are several HowTos on the net)




What I'm going to show is how to get something like below going:






What I'll describe is the setup of a so-called NodeMCU style proto board.
These have everything onboard: the ESP8266 MCU, a USB interface (for initial programming), a USB-to-serial converter (UART), Wifi incl. antenna, and regulators.
You just have to connect one of these boards to a PC and start programming.

Later on you can attach e.g. a relay board as shown above. Done.

And all that runs from a single 5V supply. 

Sounds easy!! 

It actually isn't. ;)

Otherwise I wouldn't write all this. It took me quite a while to figure it all out. Most info on the web is very fragmented.
However. Once you know how it's done, e.g. by following this guideline, you'll get things done rather easily!

Obviously you first need to get yourself one of the proto boards -- if you really intend
to get into all this.


*************************************

Disclaimer:

Don't blame me if something goes wrong! As usual, no guarantees from my side.
Everything you do, you do at your own risk. And I also can't guarantee that all the boards
out there work the same way as described in this article.
With this blog post I'm giving directions. Nothing more and nothing less. Don't expect any further support from my side.

And be very careful when working with high voltages or batteries! You'd better know what you're doing.

****************************************


Let's continue. Sourcing first.

Search ESP8266 over at Amazon. Numerous NodeMCU proto boards will pop up.

Minimum feature set:
  • ESP-12E
  • USB interface onboard
  • NodeMCU style
  • CP2102 serial converter

I'd go for the one with the most favorable comments. Most boards look very similar. And that's what they are: very similar.
There are quality issues reported for this or that board - usually soldering issues and poorly soldered GPIO headers.
I prefer the boards without pre-soldered GPIO headers. If I need the headers I can solder them myself. Usually I just solder wires straight to the board.

I also ordered a breadboard, jumper cables (male/male and female/male) and a dual relay board. Actually I bought 3 relay boards and 3 ESPs. That really brings the average cost down.

Advice: Before powering up the board, inspect it thoroughly. Check for soldering issues, obvious production related quality defects and dirt.


Tasmota Initial flashing 

The ESP board needs some digital food first. I will now explain how to get these little guys programmed via an RPi running Raspbian.

That's pretty much the first task. For the initial flash process the ESP8266 just has to be connected and powered via its USB port. You simply attach a uUSB cable.
And that'd already be it from the initial HW setup perspective.








Why RPi? Because most of you will have an RPi around. Raspbian because it's IMO simply the preferred generic OS of choice for an RPi. On RPi platforms that use Arch Linux as a base the process can quite easily be adapted. Of course you could also use e.g. Moode Audio for the job, which is based on Raspbian.

I think the actual flash process can be executed this way in a pretty straight forward manner, even by people with close to no Linux skills. 


StepByStep Tasmota flash instruction 


This phase describes the initial flash procedure. Only the initial Tasmota flashing needs to be done via the RPi. Further updates can usually be done later on through the web interface supplied by the Tasmota firmware.

As prerequisite, you need to have  ssh enabled on your Raspbian installation.   


If you - at this point - don't know what I'm talking about, or you are afraid of entering this or that command on a commandline (you don't have to be), better skip this little project.



OK. Let's get it done.

#######################################################
###
### Power up and boot the RPi WITHOUT ESP8266 attached
###
#######################################################

### ssh login to Raspbian 

### become superuser
sudo su

### update packages
apt-get update && apt-get -y upgrade

## check Debian python version ( some other systems might run incompatible python 3.x)
python --version

Python 2.7.16

### install python package manager - pip
apt-get install python-pip

### install ESP programming tool
pip install esptool



### download latest firmware binaries from Tasmota to the RPI /tmp directory
### you'll find the files over at: 
### https://github.com/arendst/Tasmota/releases
### choose the minimal version and your country specific version
###
### you could also simply copy/paste below 3 commands
### The first line identifies the most current Tasmota version. 
### The next downloads the minimal image
### and the final command downloads the english standard version

VERSION=$(curl -s https://github.com/arendst/Tasmota/releases | grep "Version" | head -1 | awk '{print $2}')

wget https://github.com/arendst/Tasmota/releases/download/v$VERSION/tasmota-minimal.bin -P /tmp

wget https://github.com/arendst/Tasmota/releases/download/v$VERSION/tasmota.bin -P /tmp


### at the time of writing Tasmota version 7.2.0 is the latest version
### check if the download went OK.

ls /tmp/*bin

/tmp/tasmota.bin  /tmp/tasmota-minimal.bin

### you see two files?  Great. We're on a good way!
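
### OPTIONAL alternative: in case the version lookup above ever stops working (the releases
### page layout may change), the GitHub releases API can be queried instead.
### Just a sketch - the grep/cut parsing assumes the usual "tag_name": "vX.Y.Z" JSON layout.

VERSION=$(curl -s https://api.github.com/repos/arendst/Tasmota/releases/latest | grep '"tag_name"' | cut -d '"' -f4 | sed 's/^v//')
echo $VERSION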


### clear dmesg buffer
dmesg -C

########################################################################
###
### NOW attach the ESP8266 board to a RPi USB port 
###
########################################################################
dmesg


[   97.461874] usb 1-1.4: new full-speed USB device number 4 using xhci_hcd
[   97.568300] usb 1-1.4: New USB device found, idVendor=1a86, idProduct=7523, bcdDevice= 2.54
[   97.568316] usb 1-1.4: New USB device strings: Mfr=0, Product=2, SerialNumber=0
[   97.568328] usb 1-1.4: Product: USB2.0-Serial
[   97.577059] ch341 1-1.4:1.0: ch341-uart converter detected
[   97.581746] usb 1-1.4: ch341-uart converter now attached to ttyUSB0


### Looks similar to yours? That's great. 

### What just happened? The ESP device got recognized and the UART converter got attached
### to ttyUSB0 - the serial interface device on Raspbian that gets assigned to the board
### UART.  ttyUSB0 is the interface on your RPi that's being used for flashing. 

### If ttyUSB0 or a similar ttyXXX is not shown - the UART somehow doesn't get recognized -
### it could be a hardware issue - or a cable/connection issue
### or some config issue
### see e.g. https://www.raspberrypi.org/forums/viewtopic.php?t=160400

########################################################################
###
### YOU NEED TO STOP AT THIS POINT IF ttyUSB0 is not shown.
###
########################################################################

otherwise continue....

################################################
###
### now the environment is set
### next we can start programming the ESP
###
################################################

You're still logged in via ssh and you still have root user permissions at this point!

### 1st an ESP communication test
esptool.py --port /dev/ttyUSB0 read_mac

esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 84:f3:eb:e3:8e:92
Uploading stub...
Running stub...
Stub running...
MAC: 84:f3:eb:e3:8e:92
Hard resetting via RTS pin...

### above output looks similar to yours?? Great. The ESP talks to us. Go ahead.

### erase flash first
esptool.py --port /dev/ttyUSB0 erase_flash

esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 84:f3:eb:e3:8e:92
Uploading stub...
Running stub...
Stub running...
Erasing flash (this may take a while)...
Chip erase completed successfully in 9.6s
Hard resetting via RTS pin...

### above output similar to yours??  Great. Now we've got a clean base.  Go ahead.


### Finally: flash Tasmota firmware

### flash minimal image first as precaution 
### (a memory issue might occur if flashing the actual large image first)
### the process will take a while. The tool shows a % progress counter.

### flash minimal image (below command goes in one line!)
esptool.py --port /dev/ttyUSB0 -b 115200 write_flash --flash_freq 26m --flash_size 1MB -fm dout 0x0 /tmp/tasmota-minimal.bin


esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 84:f3:eb:e3:8e:92
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Flash params set to 0x0321
Compressed 382304 bytes to 269164...
Wrote 382304 bytes (269164 compressed) at 0x00000000 in 23.8 seconds (effective 128.5 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...


### above output similar to yours?? Nice. Go ahead.


### flash standard image (below command goes in one line!)
### replace "tasmota.bin" with your chosen country specific image name
esptool.py --port /dev/ttyUSB0 -b 115200 write_flash --flash_freq 26m --flash_size 1MB -fm dout 0x0 /tmp/tasmota.bin


esptool.py v2.8
Serial port /dev/ttyUSB0
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 84:f3:eb:e3:8e:92
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Flash params set to 0x0321
Compressed 581776 bytes to 400933...
Wrote 581776 bytes (400933 compressed) at 0x00000000 in 35.5 seconds (effective 131.2 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...


### above output similar to yours?? Wow. Tasmota is finally loaded!


### power down the RPi (job done) and disconnect the ESP8266 from the RPi 

### You just flashed your first ESP8266. Congrats!





Next comes: 

####################################################################

The Tasmota network setup

### to execute this step you'll need your home Wifi network ID (ssid) and Wifi password.
### go, get it now!

### attach the ESP8266 to any 5V uUSB power source now
### that'll power up the ESP8266

### Tasmota needs to know where to connect to. 
### For the very first Wifi setup - and it'll do this only once !!! - Tasmota boots as a
### Wifi AP (accesspoint). 
### Tasmota then offers an initial network config web-interface
### that allows you to enter your own home Wifi network credentials

### how to access that initial Tasmota AP ??
### you can e.g. use your mobile phone to connect to the Tasmota AP / Wifi network
### lookup Wifi networks on your phone 
### you'll find a new network called "Tasmota-3730". Connect to it.

### ignore a potential "no internet access" message of your phone once connected

### now open your favorite browser
### enter:  "192.168.4.1"
### the initial Tasmota network config screen shows up 
### enter your wifi home network ssid
### enter your wifi password (make password input visible by clicking the box)
### Note: Entering wrong data at this point will make the Wifi access to your network fail, 
### and that will require you to redo the whole flash procedure!!!

### "save" your inputs!

### after "save" the ESP restarts and connects to your home network
### you can connect your phone to your home wifi network again

### find out the new IP address that's been assigned to your ESP board (e.g. via router,
### nmap, ???)
### while you are on the router, reserve the IP address for your ESP. You
### don't want that IP address randomly changing once in a while!

### again, if the ESP won't show up, something went wrong with your 
### Wifi credentials and you have to redo the programming step.

### NOTE: Only 2.4GHz Wifi networks will work!


I hope everything went fine up 2 here.


The Tasmota Setup


### In this example I'll show you how to setup a 2-relay board.

### enter the new ESP8266 IP network address into any browser in your home network
### to get access to the Tasmota web server.
### once connected, you'll initially see this:





### we need to change this.

### Enter the Configuration menu





### Enter Configure Module






### Select "Generic (0)" from the pulldown and "Save"

### Now the ESP will restart
### Once that is done go back to the module menu






### As you can see the layout changed.
### And you also see that I already changed the D1 and D2 outputs 
### to Relay with inverted (i) logic.
### It needed to be inverted logic to make the relays work properly! (on my relay board!)
### save once done with this configuration. The ESP will restart again.







### And now you have two relays to work with.
### You could now connect a relay board with e.g. IN1 to D1 and IN2 to D2.


### But first we gonna change the name of your ESP
### We do that in the "Configuration/Configuration Others" menu






### you can disable MQTT for now and enter "Friendly Names". 
### save and the ESP will restart again.

### Wow. What a journey.
### We're almost done.

### Now we need to configure the default "PowerOnState" of the relays.
### At ESP8266 PowerOn the relays shall remain OFF.

### This config can be done only via console (Main Menu: Console), as shown below





### You enter "PowerOnState OFF" in the command field and press return.

### ADVICE:

### Never attach a relay board to the ESP's 5V pin while the ESP is powered via its USB port.
### This will fry your ESP! (Why do I know that??? Don't ask.  ;)  )

### As usual. Everything you do, you do at your own risk. If you work 
### with high voltages or batteries be very careful!

### Run a few test hours on your setup to see if everything runs stable.
### There are boards that might not be of highest quality!


If you have questions or improvement proposals, we can discuss them over at DIY-Audio.
For further and more in-depth info I refer you to the Tasmota pages.


Here's my ESP8266 setup hooked up to the two DC rails of the Allo Shanti power supply, powering my RPi 4 and Khadas Toneboard:






One major point is still missing. 


Tasmota Control

There are several ways of controlling your new setup

  • Web Browser - you just enter the IP address of your ESP8266
  • Android - "Tasmota" app
  • iOS - Apple "shortcut"  app
  • command line - curl
  • and more

Basically the ESP8266 gets controlled via web or http request commands. 
The Tasmota command list and descriptions you'll find over here.

Beside controlling your ESP via web-browser, you'll find an Android app called "Tasmota".  It's very basic. But it's working.

On iOS you can define direct interaction with the ESP by using the Apple Shortcuts app.
Shortcuts can handle so-called http requests. If I find time I might describe how that'd be done. It's rather simple. You just add the http request, very similar to what's shown down in the Annex.
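
Just as a sketch (assuming the Shortcut uses its "Get Contents of URL" action - adjust the IP address and command to your setup), the URL such a Shortcut would call looks like:

http://192.168.x.xxx/cm?cmnd=Power1%20Toggle

That's exactly the same command syntax as used with curl in the Annex.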


Very powerful is using the commandline utility "curl". I'm showing some examples down in the Annex. 



Tasmota Maintenance


Tasmota is a work-in-progress project. Updates pop up regularly. Keep an eye on that!
You can load these via the Tasmota WEB-GUI.

Advice 1:
I prefer to download the .bin files to my PC first. I do not use the Tasmota Update-Over-Webserver function.

Advice 2: 
Before you run the update via web-GUI, make a backup of your Tasmota config.
In case something gets overwritten you can simply reload it.

Advice 3:
I e.g. have the issue on my Gosund power-plugs that I first have to load the minimal image before I can upload the normal image. Keep that in mind if an upload/update fails!

Advice 4:
Make sure the update over Wifi runs over a stable Wifi connection. You might want to get
your Tasmota device closer to the router/AP before running the update. 




Final words

There'd be a lot more to talk about. All this IoT stuff is a huge playground. 

I do hope that most people get a nice headstart with above article.


Enjoy.


#########################################################################


ANNEX1

curl control examples


To control your ESP environment from a Linux commandline or script, "curl" can be used.
That's IMO a very powerful way of handling the IoT environment.
Below are some examples of how curl can be used in this context:

IP=192.168.x.xxx

##status relay d1
curl -s http://$IP/cm?cmnd=Power1

##on relay d1
curl -s http://$IP/cm?cmnd=Power1%20On

##off d1
curl -s http://$IP/cm?cmnd=Power1%20Off

##on d2
curl -s http://$IP/cm?cmnd=Power2%20On

##off d2
curl -s http://$IP/cm?cmnd=Power2%20Off

## Using the "backlog" command allows to chain up commands

## turn d1 on and 10 seconds later turn d2 on 
curl -s http://$IP/cm?cmnd=Backlog%20Power1%20On%3BDelay%20100%3BPower2%20On

## turn d2 off and 3 seconds later turn d1 off.
curl -s http://$IP/cm?cmnd=Backlog%20Power2%20Off%3BDelay%2030%3BPower1%20Off


Note: delays are given in tenths of a second (i.e. seconds * 10 - a Delay of 100 equals 10 seconds)

## I use this "chain-and-delay" method to e.g. turn on my DAC first, followed by the RPi and finally the amp
## by using a single button. 
## Turning these devices OFF goes the other way around. For that there'd be a second button.
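
## Just a sketch of how that single-button idea could look as a tiny shell script for a
## 2-relay setup (the IP address and the 10 second delay are assumptions - adjust to your devices):

#!/bin/sh
IP=192.168.x.xxx
case "$1" in
  on)  curl -s "http://$IP/cm?cmnd=Backlog%20Power1%20On%3BDelay%20100%3BPower2%20On" ;;    ## d1 first, d2 10s later
  off) curl -s "http://$IP/cm?cmnd=Backlog%20Power2%20Off%3BDelay%20100%3BPower1%20Off" ;;  ## reverse order
  *)   echo "usage: $0 on|off" ;;
esac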

Note: "%20" are spaces and "%3B" are semicolons in above examples. It has to be done that way!









Shoot the Trouble -- USB Audio Interfaces


With all the very interesting Raspberry Pis and other ARM devices around, Linux becomes more and more interesting for many people. Great audio transports can be built for around 100$.
Not to forget: tablets and phones are mainly Android devices, and Android is just another Linux, using the same soundlayer (Alsa) as all other Linuxes.

Manufacturers usually still do not commit to supporting Linux or Android properly.
Which is insane. The vast majority of mobile devices out there are Androids.

However. Many devices work or partially work under Linux, because manufacturers comply to general USB Audio Standards (UAC1/UAC2). Meanwhile even Pro Audio companies like RME offer a "Class Compliant" mode for their newest generation of USB devices. (In the RME case they do officially focus on OSX though.)

Anyhow. Even though things are getting better, there are still plenty of  cases where you'll experience NO SOUND.

That doesn't necessarily mean that your device won't work under Linux.

This article outlines a little guideline for troubleshooting your USB audio interface under Linux.
It should give you certain hints what to look for.

However. Just to make it clear from the very beginning.

I won't support anybody who has issues with their interface!!! Check out Google or the community.

If you have comments for improving the article please let me know.





 Many people do have very limited Linux knowledge and give up very quickly if their interface doesn't
 work after turning on the Linux  machine.

Let's start.

Checkout following points.



1. Clarify if your device is UAC2 or UAC1 compatible.
    Ask your dealer, the manufacturer, the community.
    If it is not UAC compatible, you'll need a dedicated Alsa driver. Alsa is the Linux soundlayer.
    Check if there is a dedicated driver for Linux or Android available.
    If this is not the case you won't get the interface up'n running.

    Beware:
    E.g. The Audioquest Dragonfly pages do not talk about Linux support. They wouldn't ever do so.
    However. The device is working under Linux quite well.

    Advice: If you intend to purchase a device, make sure it is at least UAC2 compatible.
    Don't let yourself be fooled by the feature list or forum feedback about great sound or similar.
    Get an interface that works on all platforms.


2. Use the latest Linux distribution you can find. Ubuntu is usually your best bet.
    The community and community support is huge.
    Key is that you've got a pretty up2date kernel installed, which usually delivers the newest audio driver.
    Ubuntu is usually half a year behind the generic kernel development. It's not that bad.
    ArchLinux, another distribution, usually comes with the most up2date SW.
    But  ArchLinux is not recommended for users without in-depth Linux knowledge.
 
   You might install Ubuntu on a stick, just to test if your device gets recognized.




3. Now I'll post some commands that you issue in a Linux terminal. Please open a terminal (usually
  the shortcut "CTRL-ALT-t" should do).


3.1 List all recognized audio devices:

(type commands without #)

# aplay -l


or


# cat /proc/asound/cards


3.2 Check all recognized USB devices

# lsusb

or more in-depth

# lsusb -vv


3.3 Check if the kernel module (which is the driver) is loaded

# lsmod | grep snd_usb

You should see "snd_usb_audio" listed 


You might get a long list. Scroll back. You might find your device listed.


If you can't find your interface at this point. It is most probably not supported.

If you can see it. You might face a setup issue. Continue.
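
For USB audio class devices the kernel also exposes which formats, channel counts and samplerates the interface announces. Just a sketch - "card1" assumes your interface got index 1 (check "cat /proc/asound/cards" first):

# cat /proc/asound/card1/stream0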


3.4 Check alsamixer settings

Some soundcard controls are set to 0 by default (e.g. the Dragonfly onboard volume control).


First find out your soundcard number:

# cat /proc/asound/cards

All listed audio interfaces are indexed, starting with 0 and continuing with 1,2,3,4.
Memorize the index that belongs to your interface.

Now we open alsamixer. Replace "INDEX" with your index  number.

# alsamixer -c INDEX


Use the arrow keys to navigate. ESC to exit alsamixer.
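
If you'd rather set a control non-interactively (e.g. from a script), amixer can do the same job. A sketch only - the control name 'PCM' is an assumption, check what the first command lists for your card:

# amixer -c INDEX scontrols

# amixer -c INDEX sset 'PCM' 100% unmute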




4. Soundcheck


There is a nice little Alsa utility called speaker-test. It just generates noise.
That's sufficient for testing purposes. Just run

speaker-test -D plughw:1,0 -c 2

Replace "1" with your card index.  -c 2 stands for a two-channel test. You'll hear noise continuously switching between the left and right channel.

Note: Turn your volume control down before you start this test!!






Still no sound...

....let me know.


5. Dropouts, Pops, Clicks, Xruns,.....

There are many setups where users experience nasty XRUNS.
Usually these are caused by buffer underruns.
That means your DAC requires more data than your PC is willing or able to deliver. That's usually caused by competing processes on the same machine.

The first measure is to increase your Alsa output buffer sizes. Usually you'll find
a parameter in the settings menu of your preferred app.

If we take squeezelite as example you'll have 20ms buffer by default.

You might try to increase this setting by adding

-a 40:4::1

or larger

-a 80:4::1

or larger....

As a second measure you can try to increase the task priority.

With squeezelite that would be using option -p e.g.

-p 85

Note: 99 is max. The higher you get, the higher the chance that you lock up the system.
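
Putting both measures together, a squeezelite call could look like this (just a sketch - the output device "hw:1,0" and the values are assumptions, adjust them to your card and setup):

squeezelite -o hw:1,0 -a 80:4::1 -p 85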


Still XRUNS??? ..

....Let me know

To be continued...















Introducing: The Audio Streaming Series

"Raspberry Pi - The audio engine" series finally got restructured, updated 
and hopefully slightly enhanced. 
"Enhancement" of course depends on the readers perspective and expectations. ;)

The series is now called: "The Audio Streaming Series"

And can be found at the top of the blog.



The main intention is to have it all at one spot. 

The goal is to make it easier for readers to view the entire project at a glance 
and to follow it step-by-step more easily.

The whole project still is and probably always gonna be work in progress.
It'll reflect the outer world developments and my take on it.

I know. Blogger is not the greatest of all blogging platforms. 
It has its limitations, many of them. It's free though. 

I'd love to move on to another hosting site, setting up a nice word-press blog. 
But hey. This all would cost even more time and money. 

For now I'll keep it as and where it is.


Enjoy.

........................................................................................................

Raspberry PI - I2S-HATs @ 384k

Today I'd like to share how to introduce 352k8/384k upsampling 
by running LogitechMediaServer as server and squeezelite as a client.

This post, from a hardware perspective,  pretty much relates to my 

RPI I2S HAT DAC projects I've been running over at DIY-Audio.

Some DACs (TI PCM51xx family) have shown a slightly improved performance 

while running upsampled material. That's the main reason for writing all this up. 







To begin with: The differences that I've been experiencing haven't been earth-shattering IMO. Still, they give the whole presentation an IMO worthwhile enhancement.  



I'm pretty positive that this exercise is well worth a try.


###############################################################

Note: Before you start!


Make sure your OS of choice  supports 352k8/384k samplerates for I2S HATs. 


If you do not run e.g. PiCorePlayer with the advanced audio kernel, 

this exercise will fail! The standard RPI kernels won't support all samplerates.


By default the Raspberry PI I2S is limited to 192kHz.
There are just a very few distributions that offer full up2 384k I2S support.
The kernel and drivers need certain non-standard patches to support
these high samplerates through I2S. (USB-DACs don't have any issues with these rates !) 

And then not every I2S (HAT) DAC can handle 
 these high samplerates from a HW (quality) perspective!

There were even cases where different DACs from the very same brand
have been working and others simply failed! 

Bottom line. Make sure your DAC theoretically supports it, before you start your journey!


What I do know is that my Allo Boss DAC works rock solid at up2 384kHz.


I did also experience that these high samplerates do not bring any advantages 
on certain DACs, such as the Sabre ES9023. For these DACs this exercise is IMO a 
waste of time. 
You might still want to continue reading this article, since I also added some other side 
information that might be of interest.


Why the upsampling fuss at all?


1. Your external resampler might deliver a better quality than the internal ON-DAC
    resampler. 

2. Then there are certain DAC chips, e.g. the TI PCM51xx family, which bypass
     the internal filters altogether at 384k/352.8k samplerates. 


The OnDac filters and DSP functions usually have (quality) limitations due to HW limitations.

Bypassing or replacing them seems to be a feasible option to squeeze a bit more out of these DACs.

On the other hand: 


Every data manipulation (DSP) causes losses! 

Upsampling is not a lossless process either.
Hmmh. Then why upsampling at all?? 
There are cases where upsampling might be the lesser of two evils!
The "losses" equation in our case would be :

On-Dac filters VS. sox upsampling incl. level reduction! + higher processing load 


Can I predict the outcome???


Nope.


Just try it and find out yourself. 

The results most probably will differ from setup to setup - as usual.

###############################################################


Let's see how far we'll get with the below:


The example relates to LogitechMediaServer (LMS ) on a Linux platform 

such as Ubuntu. Adaptations to OSX or Windows can be made quite easily.


I'd assume that you have installed the  LMS 7.9.x version.


Note: 

I would not recommend running CPU-consuming upsampling on a RPI (using squeezelite),
on a NAS, or on any other low performance platform. 

That's why I do not cover these scenarios!


I'm using an x86_64 platform - an Intel Core i5 Broadwell NUC - as LMS server (I use this machine for multiple tasks, incl. my everyday desktop work).



We'll be  using the excellent sox tool to do the upsampling. 




###############################################################
Preparations:

Make a backup of your server first! You do everything at your own risk!


First, we setup the environment.


Ok. Ok.  Wait a minute!!!


Before you go any further, and perhaps waste several hours, run a simple test first! 

Just offline-convert and play some of your favorite flacs.
That's rather simple to accomplish.

Examples:


sox  -D test.flac -t flac -C 0 -b 24 test352k8.flac rate -v -b 95.0 -p 50 -a 352800 dither -S

or
sox  -D test.flac -t flac -C 0 -b 24 test384k.flac rate -v -b 95.0 -p 50 -a 384000 dither -S 

(replace filenames according to your filenames)

if you experience clipping try below:


sox  -D test.flac -t flac -C 0 -b 24 test352k8v092.flac vol 0.92 amplitude rate -v -b 95.0 -p 50 -a 352800 


However. The last example will lower the signal level before conversion.
Keep in mind that you should also run that volume adjustment over the original without
samplerate conversion, otherwise your comparison might be misleading!  

Example: sox  -D test.flac -t flac -C 0 -b 24 testv092.flac vol 0.92 amplitude 


And then there's another one, which can make these files perform different!


The compression level!

You probably don't know what compression level's been used while generating your flac.
You can't figure it out later on!!!!

I always use compression level 0 !

Just to mention it. Even the below example, which just potentially applies a new compression level, can already make a difference:


sox  -D test.flac -t flac -C 0 -b 24 testC0.flac

The samples are the same, but the compression level might differ. 

The decoding effort is usually slightly higher with higher compression levels. 
A realtime decoded stream might show some impact!  

###

If you're happy with above test results you might want to continue here:


We better use the latest sox, since the LMS delivered sox is about 10 years old!

And then on top of that - a real issue -  the LMS supplied sox binary is extremely inefficient.

You run (just copy/paste - block by block ) following commands:


sudo su



/etc/init.d/squeezeboxserver stop



apt-get -y install sox 

cd /usr/share/squeezeboxserver/Bin/x86_64-linux/ 
cp sox sox.orig
cp $(which sox) sox
cp $(which sox) sox2
chmod 770 sox*
chown squeezeboxserver.nogroup sox*


apt-get -y install flac 

cd /usr/share/squeezeboxserver/Bin/x86_64-linux/ 
cp flac flac.orig
cp $(which flac) flac
cp $(which flac) flac2
chmod 770 flac*
chown squeezeboxserver.nogroup flac*


Note: 

By using the Linux/Ubuntu platform flac instead of the LMS supplied flac, we'll lose the fast forward and rewind functionality. The LMS supplied flac is a patched version of the original. It's outdated though!

cd /etc/squeezeboxserver
test -f custom-convert.conf && mv custom-convert.conf custom-convert.conf.orig
touch custom-convert.conf
chmod 644 * 
chown squeezeboxserver.nogroup * 



###############################################################

Now we look at the actual conversion rules.
Below are two of them, including my preferred upsampling settings.

1. The first rule reads flacs, upsamples them at highest quality with linear filters and outputs flacs again.
2. The 2nd reads flacs, upsamples them and outputs .wav format.

Note: I do not use level adjustment below! There will be some clipping (a couple of hundred clipped samples is nothing unusual)!


Both rules you can enable /disable under LMS server/settings/advanced settings/file types



The conversion rules would look like this:


flc flc * *

# F
[sox] -q -t flac $FILE$ -t flac -C 0 -b 24 - rate -v -b 95.0 -p 50 -a 384000 dither -S 

flc pcm * * 

   # F
   [sox] -q -t flac $FILE$ -t wavpcm -e signed -b 24 - rate -v -b 95.0 -p 50 -a 352800 dither -S  


Note: We leave the result at 24Bit (-b 24).


Below I added an example of how to configure MAC based routing on LMS.

You basically have to replace the 2nd wildcard with the MAC address of e.g. your RPI 
to run the upsampling just for a single specific client - identified by MAC. 
All other clients would run  the default conversion rule (no resampling).


flc flc * b4:27:eb:dd:04:de

# F
[sox2] -D -q -t flac $FILE$ -t flac -C 0 -b 24 - rate -v -b 95.0 -p 50 -a 384000 dither -S 

flc pcm * b4:27:eb:dd:04:de 

   # F
   [sox2] -D -q -t flac $FILE$ -t wavpcm -e signed  -b 24 - rate -v -b 95.0 -p 50 -a 384000 dither -S 



You might have noticed that I used "sox2" as the binary here. I actually made a 2nd copy of the
sox binary during the prep phase.
I use this trick to differentiate the different rules inside the LMS advanced settings/file types
menu! There won't be any other differentiator than this!

You probably run mainly 44.1kHz samplerates. I'd say go for a 352800 setting in

above examples. Just replace the 384000.


I do also have a wrapper-script that does synchronous upsampling, e.g. 44.1 to 352k8 and 48 to 384k. That's worth a different article though!
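
Just to give an idea, below is a minimal sketch of such a wrapper (not the script I'm actually using). It reads the source samplerate with soxi and picks 352800 for the 44.1k family and 384000 for the 48k family. You'd call it from a conversion rule with just $FILE$ as its argument (the rule would need adapting accordingly):

#!/bin/sh
### sync_upsample.sh - minimal sketch, writes the upsampled flac to stdout
### usage: sync_upsample.sh /path/to/file.flac > out.flac
IN="$1"
RATE=$(soxi -r "$IN")
case "$RATE" in
  44100|88200|176400) TARGET=352800 ;;   ### 44.1k family
  48000|96000|192000) TARGET=384000 ;;   ### 48k family
  *)                  TARGET="$RATE" ;;  ### leave anything else untouched
esac
sox -D -q -t flac "$IN" -t flac -C 0 -b 24 - rate -v -b 95.0 -p 50 -a "$TARGET" dither -S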




Let's start setting up the resampling rule for all of your clients at once:

Let's assume you're still logged in.


Now we run:


sudo su


cat  > /etc/squeezeboxserver/custom-convert.conf << 'EOT'

flc flc * *
# F
[sox] -q -t flac $FILE$ -t flac -C 0 -b 24 - rate -v -b 95.0 -p 50 -a 352800 dither -S 


flc pcm * * 

   # F
   [sox] -D -q -t flac $FILE$ -t wavpcm -e signed  -b 24 - rate -v -b 95.0 -p 50 -a 352800 dither -S 

EOT


Now we can restart the server.


/etc/init.d/squeezeboxserver start


or just reboot the machine.



The difficult part is done!



Two more steps:

1.
Under LMS/setup/advanced configs/filetypes we need to enable above custom rules "flc flc" and "flc pcm" .
Don't forget to push the "save" button. 

2.

squeezelite config requires a slight adjustment.

To allow "server based upsampled PCM" support, you need to add "-W" to the optional parameters.

This setting allows squeezelite to extract the samplerate from the PCM header, since the server can't tell squeezelite what samplerate you're upsampling to.   
For "flc flc" conversions this (-W) is not required. 
However. Just leave "-W" in as the default setting. It won't hurt.

Now the final step - another hidden squeezelite feature.


You can define or limit  supported codecs with  "-c" for squeezelite. 

By default all supported codecs (flac,pcm,mp3,ogg,aac,dsd, mad+mpg for specific mp3 codec) are enabled.
If you now have several conversion rules activated at the same time on LMS - e.g. "flc flc" and "flc pcm" which conversion rule will be taken??? 
The server will communicate with squeezelite about it. The process basically follows the rule "first come, first serve". 
Unfortunately by using the squeezelite default codec setting (all enabled), 
we don't really know and we can't change the sequence, because that's hardcoded. 
That's why we make use of the "-c" option for our squeezelite setup.

Examples:


-c pcm,flac,mp3


would mean that the "flc pcm" rule would be taken to convert the stream from flacs.

Because - yep - "pcm" is the first on the list.

-c flac,pcm,mp3


would mean that the "flc flc" rule would be taken to convert the stream. I guess you figured out why!?!?

Note: The above feature was introduced not that long ago - at the beginning of 2017. If you run an older squeezelite version make sure you get the newest.

After that you just restart squeezelite and hope nothing went wrong.



Finally:


If you end up being unhappy with all of the above, just run a restore.

To restore the original situation below should do:

sudo su


/etc/init.d/squeezeboxserver stop

rm /etc/squeezeboxserver/custom-convert.conf
test -f /etc/squeezeboxserver/custom-convert.conf.orig && {
mv /etc/squeezeboxserver/custom-convert.conf.orig /etc/squeezeboxserver/custom-convert.conf
}
cp /usr/share/squeezeboxserver/Bin/x86_64-linux/flac.orig /usr/share/squeezeboxserver/Bin/x86_64-linux/flac  
cp /usr/share/squeezeboxserver/Bin/x86_64-linux/sox.orig /usr/share/squeezeboxserver/Bin/x86_64-linux/sox  
/etc/init.d/squeezeboxserver start

###############################################################


There's one more option. I don't like it, but I'm pretty sure many of you would 
like to try it, because this setup is the easiest to accomplish. 
Your PI will have to be able to manage it though.
If your PI locks up and causes this or that issue, don't blame me!

Below an example of squeezelite doing highest quality resampling.


squeezelite autodetects the maximum available samplerate that your device/driver supports.


If you add the below options to your squeezelite config you'll run highest quality resampling with 0.5dB attenuation (the squeezelite default attenuation is 1dB! - you could also disable the attenuation by inserting "0" instead of "0.5", which will - as discussed earlier - generate certain clipped samples).


-R -u vLs::0.5:28:::


Then restart the player and enjoy.

Above example would resample (sync-mode) e.g. 44k1 to 352k8

You could also add an "X". This will resample everything (async-mode) to the highest available rate.



-R -u vLsX::0.5:28:::

Above example would resample e.g. 44k1 to 384k


###############################################################


That'll be it. 
Now, after writing all of this up, I kind of realized that the whole thing is actually not 
that simple. It'll take quite some enthusiasm, especially for someone with limited Linux 
background, to get going. 

If you're stepping over flaws or open questions while running above exercise - just let me know. 


Enjoy.

CD - RIP

Anybody out there still ripping CDs?  Or is it just me. ;)



Recently I figured that some tracks were missing. No idea when and why this happened. 

OK. Quick decision taken. Re-Rip that CD. Yep - I still keep my CDs in the attic. 

Gee whiz... Re-ripping a track "quickly" turned into a major exercise (nightmare). 

That's the story about this exercise.





Intro  

I've been using dBPoweramp  (dBP) for the (mass) audio extraction job in the past. 
It used to be my reference. And I think it still is the reference extraction tool out there. 
Good old EAC comes close to dBP of course and it's still (2020) free. 

I haven't been ripping CDs very often in recent years. As soon as I started this project I figured I'm running an outdated dBPoweramp version.  Sh.. . It happened before. Every time I want to use dBP I have to pay ~$30 for the next upgrade. Grrr.

Hmmh. It's just one track that I need. Nope - I am not willing to pay anything for dBP anymore.  

Beside that, dBP wouldn't work properly within a virtual environment (Virtualbox) anyhow.
I - as a Linux user - am running my W10 installation under Virtualbox btw.


Background: I figured that the CD-drive parameters required for Accurate Rip to work won't be accessible in a virtual environment.  

I didn't feel motivated to give EAC under W10-Virtualbox a try either. It'll be the same situation.
In the end I'd require a full Windows 10 installation to run these tools. Nope. Not at this point.

What a mess! A dead-end.


I kept going.


Let's have a look at Linux - my actual home-turf. 

How is the situation on the Linux CD extraction field nowadays? It has never been great - that's why I've been using Windows based tools for that task, pretty much the only Windows tools I've been using. 
I've been quite pessimistic that the Linux ripper situation evolved a lot. I rather expected the opposite.

I wasn't wrong with that prediction...

Over more than 2 decades there are/have been several, some of them quite promising, projects out there, such as

ABCDE, ripperX, Rubbyripper, Morituri, Whipper, soundjuicer, Asunder and some more...

After having a closer look, which took me another 2 hours, I figured that many of these projects are outdated - simply not maintained - thus pretty much dead - and have been for years!
The most advanced of the bunch would probably still be ABCDE.

However. 

ABCDE is not even offering Accurate Rip support. To me this is the crucial feature for comparing my rips with others.


Again. What a mess!!! I considered the Linux CD extraction application arena another dead-end.


I kept going.

Let's think.

What actually are the crucial parameters to get a close to perfect copy of an audio CD?


  1. an intact CD - flawed, scratched and dirty CDs just waste your time and energy
  2. a reference database - you need to know if your extracted data is correct
  3. a widely supported and secure data format - flac and flac only !
  4. a reliable tag database - however - you won't ever get around manually editing your tags anyway!
  5. a source for album arts - Google will be your best friend on that one

That looks manageable. I finally decided to set up my own extraction process. 


Let's see how this one goes.



CD Extraction process

CD-Drive and Accurate Rip


The CD-Drive of choice needs to be present in the Accurate Rip database, which lists the key Drive-Offset-Correction parameters. Applying the right offset correction is a must to end up 
with an Accurate Rip. 
Accurate Rip is the only Audio Extraction Reference method I'm aware of that generates repeatable results. Using AR is the only way to verify your rip results - each and every single track - against a common reference and other people's rips. If 50 people have the same result
you can be sure that your rip is OK. 
Bottom line: Not using AR will prevent you from getting accurate rip results. AR is therefore a must.

How it's done: A checksum gets generated per ripped track. These checksums are compared against the checksums stored in the AR database. The disc-id - every CD has a disc-id - is used as the CD identifier. The AR database holds the results of millions of rips. If the ripped track matches the AR database, the rip can be considered "Accurate". 

Can Accurate Rip be trusted? It turned out to be a very good question!!!

A little history.
The AR reference was derived from a single - patient-0 - CD-drive. 
All other rips, if AR is used, were and will be adjusted according to that patient-0 reference drive offset. 
During the rip, the extraction tool adjusts its input data stream from your drive so much that the ripped result from your drive equals the patient-0 drive result. How many samples are to be adjusted is defined by the offset-correction parameter. 
This is done by redefining the track borders - moving to earlier or later samples - to end up with the same result as patient-0. The drive-offset differs from drive to drive. AR offers quite a database with drive-offset-correction parameters for numerous CD drives out there.

And now comes the catch. These AR Rips are actually NOT accurate. These rips are first of all identical!! Identical to the rip as if it would have been done on patient-0.

You might already guess where this is going. A developer did some research and figured

The patient-0 reference is wrong!!!!!! (meanwhile confirmed by the AR designer!)

It was found not long ago that the AR drive offset is off by 30 samples. (I'm sitting here shaking my head every time I think about this debacle.) Can you imagine - all rips done based on
properly applied Accurate Rip offsets are flawed!?!? The designer simply responded with something like: sorry, it can't be changed anymore.

Anyhow. How bad is this? There are different opinions about it. In my opinion it's bad.
You simply can't call the tracks Accurate anymore.

However. Should I forget about AR now? I'd say not having any reference is probably worse than having this 30 sample flaw in place. At least I'll end up with identical rips.
I could also adjust my offset by 30 samples and end up with a finally accurate rip.
That one obviously can't be checked against a reference database. Hmmh.


Just to mention it. Keep in mind. This issue of course also affects dBPoweramp and EAC rips!


Again. Do 30 samples really matter? To me it remains a valid question.


I decided to continue to use the flawed offsets.

Let's get it done. What do we need?


1. Extraction drive



I had a look at Amazon and Accurate Rip database. And read some discussions here and there.

A reasonable device for the job seemed to be a Lite-ON eBAU108 drive.
It sells at around 25$/€. 

Its drive-offset "correction" is listed with "+6" in the Accurate Rip database. That means the actual drive offset is "-6". Keep that in mind!!!
        
That'll do.
    

2. Extraction tool


    Pretty much all Linux CD extraction tools are making use of a low level extraction tool
    called cdparanoia.

    It's a basic commandline tool offering a wide range of features for a
    reliable and high quality extraction job.

    You can assign your CD-drive's drive-offset, which is required for identical rips in line
    with Accurate Rip.

    We'll end up with non-tagged .wav files after this step.

3.  WAV to FLAC conversion


     With a simple one-line command we can convert the generated .wav to .flac. We'll use the 
     "flac" commandline tool for it.

4.  Accurate Rip verification


     That's been a tricky one. Luckily somebody wrote a Perl script that does the job.
     The tool looks up the disc-id in CDDB, scans the files and compares the result with the
     AR database. Lean and clean. 

     We'll use that one.

5. Tagging


     My favorite Linux tagging tool is Puddletag. It gets us access to the known tagging
     databases like CDDB or MusicBrainz. And it also offers to create
     filenames and directories based on the chosen tags.
     You simply load the new untagged flacs into Puddletag and let it fetch
     the tags.

     Unfortunately Puddletag is no longer maintained. The original was based on 
     Python 2.7+. Since 2020 Python 2.x is dead. The good news: there's a Python 3.x
     spinoff under development. It's already working partially. However, for now it 
     has to be installed from the git sources.

6. CoverArts


    Google image search will be your best friend.


That'll be it.



Now the fun parts starts. 

***********************************************************************************************

The ripping - Prep stage


Some basic hints supporting your CD extraction project - no matter what tool is being used:

  • I'd suggest to use flacs as target format with compression level 0 (see my flac articles)
    Forget wav or e.g. no-compression flacs! 
    wav tagging is poorly supported across players and tools. With .wav files you also don't have a way to check if the files are corrupt.
    No-compression flacs are much slower from a decoding perspective!
  • Make sure your flac encoder uses the latest flac code!
  • You should look for highest quality images as coverarts via Google image search
    Look for clean images with a minimum resolution of 500x500 "square" pixel dimensions. 
  • Make your choice for the coverart filename - and then keep it for all your CDs
    I recommend to use "folder.jpg" for all of them.
    I do not embed coverarts into the files btw! 
  • Usually you can't or don't want to use the default tag and file structures offered by whatever tools you'll be using. Please have a closer look at that!!!
  • Use mp3tag under Windows or puddletag under Linux to edit and/or add your tags.
  • Have a closer look at the "genre" tag. To me this is a very important tag in dealing with my quite large collection.
    In 99% of all cases where I'm not looking for a specific album I first select the genre and then the album underneath.
    Very often the Genre tag is not set at all, or not set properly, by the online databases! 
  • Folder/file structure 
    Below you'll find my preferred structure: 
         /music/folk/Norah Jones-Come Away With Me-2002/02-Norah Jones-Come Away With Me - Seven Years.flac

         That's the result of trying this or that over years.
         
         Most tools give you by default e.g.
      
   /music/Norah Jones/Come away with me/01-Seven Years.flac

         As you can see the actual flac filename doesn't tell you anything. No genre reference.
         No year.  It's IMO all but convenient to manage such a collection.


  • Classical Music Tagging
         That's a challenge with above structure, which is actually based on what's being
         offered by the databases and tools out there. You better stick to it! You actually can't
         get around it.
         Otherwise you might end up with compatibility issues depending on what player app
         you'll be using.

         CDDB or MusicBrainz won't get you proper or consistent tags for classical albums.
         There is no way around manually editing these tags!

         Just a hint. The way I do it: 
         I add the (to me) key artist - soloist or conductor or orchestra, depending on the album - into the artist tag. Usually the conductor/orchestra/composer then goes into the album
         tag.

         Make sure you have a great coverart that pretty much says it all for that classical CD!
         You'll appreciate it later on.
  • Others things to cover

    • various artists/samplers
    • different CD dates (first release/remaster1/remaster2)
    • sample rates (e.g. add "-2496" to the albumname)
    • CD sets ( add CD1/CD2 to albumname and you could e.g. use the discnumber tag)
    • and probably more

The above list should hopefully have made clear that you need to have that engineering done before you start the project!

Do a test run with a couple of different album/genre rips and tools to make sure you can handle the whole thing properly. You don't want to re-rip or re-tag hundreds of albums! 

If your process and tools are well prepared you should still calculate 10 minutes
effort for a properly ripped, tagged (edited) and stored CD.

Don't forget to introduce a safe backup strategy! Run backups during the rip project!

The Ripping  - The Linux way

Due to lack of all-in-one (GUI based) tools under Linux  this exercise became a "hardcore" commandline exercise on a Linux system. 

If you know how it's done, it's not that difficult after all. I'll roughly outline the process.

1.
Install 
  • cdparanoia (rips the CD - cdparanoia is available in most repositories, not maintained though!) 
    Note: There's a libcdio-paranoia project. It is built on cdparanoia and an up2date libcdio.
    It comes with a binary called cd-paranoia and accepts the same parameters.
    I'd recommend to use that one!
  • flac  (converts wav to flac - it is usually already installed on most systems)
  • ARFlac.pl (checks the ripped and converted files against Accurate Rip - must be installed manually!)
on your system 

2. 
Attach your CD drive and insert the CD

3. Open a terminal
    
Use your own drive offset correction (looked up @ Accurate Rip) and directory names in below example!
Note:  cdparanoia requires the offset-correction parameter!
 
WORKDIR="/tmp/Eric Clapton - Unplugged - 1992"
TARGET="/music/blues"
OFFSETCORRECTION="6"
mkdir -p "$WORKDIR"
cd "$WORKDIR"
### rip all tracks to wav in batch mode, with the drive offset correction applied
cd-paranoia -Bw --sample-offset "$OFFSETCORRECTION"
### convert every wav to flac at compression level 0 and delete the wav afterwards
ls *.wav | while IFS= read I; do
flac --compression-level-0 --delete-input-file -w "$I"
done
### verify the rip against the Accurate Rip database
ARFlac.pl "$WORKDIR"
### copy the album folder to the music collection
cp -r "$WORKDIR" "$TARGET"
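
### optionally, let the flac decoder verify the freshly encoded files
### (a quick integrity check, independent of Accurate Rip)
flac -t *.flac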




4. Adding tags

Now you can open puddletag and add your tags and change the filenames (tags>>filename).

5. Add your downloaded and renamed cover art "folder.jpg"

6. Change folder and file permissions, if needed


And that'll be it.

Wrap Up


You'd think ripping and tagging a single CD "properly" should be an easy task. 

It isn't. Unless you really don't care about the quality you end up with.

Ripping a whole collection would then be a hell of a project. And it is a hell of a project
if you run a Linux system. 

I'd say. Go for EAC on a Windows machine. It'll do much better than a Linux based approach. 
Or do the math for signing up with a streaming service. That'll save a lot of work and time.
And keep in mind. Buying and managing disks and backup disks will also cost you several hundred $/€ over the years!



Anyhow. Good luck with your project. It's gonna be a challenge.
 
And, speaking for me, with the above method I made myself rather independent of bloated and unsupported tools. Next time - I hope - ripping a CD will take me just 10 minutes.

Enjoy.



######################################################################


Resampling - If you can't avoid it...

(Last update: Jun/2020)

Intro 


I've been looking at the subject of audio data resampling now and then.
Somehow you can't avoid looking into it - for several reasons.

In my case my HW setup (DAC) suggested to try resampled data to achieve an overall potentially better performance. 

While scanning the net I stepped over a lot of very fragmented and also incomplete 
information. 
Quite some stuff out there has a marketing flavor attached to it. And obviously you'll find numerous different opinions about the best way doing things.

With this article, I'm trying to cover certain aspects, such as


  • What is resampling or upsampling/downsampling or oversampling?
  • Why resampling?
  • Quality factors and quality targets
  • What tools and applications ? 
  • Implementation on LogitechMediaServer
  • Offline resampling
  • Examples



Let's get started.




What is resampling, oversampling, upsampling, downsampling?


Basically "resampling" is the overall expression for changing a given samplerate to another samplerate.

Up- and downsampling are giving resampling directions.

There are different kind of methods to go up or down. 


1. Asynchronous resampling (44.1kHz to 48 x N or 48kHz to 44.1 x N)
2. Synchronous resampling (44.1kHz to 44.1 x N and 48kHz to 48 x N)

As usual a Wikipedia article explains quite well what we're talking about.

Basically "N-1" samples will be added after each sample. 

At a resampling factor of N=2 we'll add one extra sample after each sample. 

4 samples at values 1-3-5-2 will become 8 samples at 1-0-3-0-5-0-2-0

Yep. I know. Just adding 0s or keeping the new samples at the same level wouldn't 
be of any help.

It requires a so called interpolation filter to fill the added samples with relevant content. 

Let's apply an interpolation filter.

Starting with 1-3-5-2 would get us to 1-2-3-4-5-3.5-2-...

Hmmh. You can already see that this interpolation method has its limits:

This approach is called linear interpolation.

We just connect 2 original samples by a line and use the in-between value on that 
line for the new sample (between 5 and 2 that's 3.5). 

Since there's nothing linear about waves you can see that this method will also cause losses/distortions. We end up with a sine wave of a finer granularity staircase that shows some linear slopes. Not perfect. But potentially a bit better. 


How much of an impact of these flaws we'll notice (or rather hear) later on is a different question.

Let's keep it for later.


Why resampling?


There are several "good" reasons why audio designers and manufacturers resample the incoming data.


1. You fight aliasing effects

     Aliasing effects are frequencies in the audio band that don't belong there.
     It starts with frequencies higher than the Nyquist band (sampling frequency/2,
     i.e. 22.05 kHz for 44.1kHz data) that fold back into the audible range.

     That's the short version. ;) Have a look at Wikipedia for more on that.
     
    The required aliasing filter has to be very steep. It would require a filter at about
    100dB attenuation to get all the dirt out between 20 and 22.05kHz - in theory -
    if you don't want to start the filtering in the audible audio band.

    Steep and aggressive filters usually come at a price. Usually they generate plenty
    of distortions. And if you're close to the audio band that's not good. These distortions
    might become audible.

    If you start your filter lower than 20kHz, you will obviously not just impact the distortions, you'll also impact the audio signal.

    The filter quality can make a huge difference. There's a so called "Filter Brewing" thread
    over at DIY-Audio. You wouldn't believe what differences a filter can make according to
    the discussions over there.


    The higher the samplerates, the less aggressive - thus less intrusive - the filtering
    can be configured. 


    And that's the 1st reason to give resampling a try. Replacing aliasing filters that otherwise
    could have impact in the audible frequency range and move them up the spectrum.


2. You try to fight jitter

    I guess you heard about jitter. I simply quote Wikipedia:

    "jitter is the deviation from true periodicity of a presumably periodic signal"

    Low jitter in the single digit picosecond or femtosecond area is what a good
    DAC needs to deliver in 2020.

    Many DAC manufacturers are trying to fight incoming jitter by introducing
    an asynchronous HW resampler (ASRC). The whole stream will then be resampled by
    using a highest quality clock.

    By resampling the data with an asynchronous HW resampler (ASRC) you, as mentioned,
    reclock your data stream with that ASRC chip, which sits right in front of your DAC. 
    In certain cases this method reduces jitter seriously. Further you'll also end up with a
    single samplerate. A samplerate that usually fits best to the chosen HW/DAC
    setup.

    As usual there's a catch. Usually the ASRC sampling algorithms are not of highest quality.
    And ASRC HW is not lossless and jitterfree either. These chips can introduce jitter or noise
    on their own.



3. One samplerate for all 

    Certain DSP applications (e.g. MiniDSP) and operating systems such as Windows, Linux
    Android, OSX can only work with a single samplerate in normal operation mode.

    They do it to be able to handle different samplerates from different
    streams, sources, processes, inputs at the same time. 

    Don't expect highest quality from these usually quite basic on-the-fly resamplers! 


4. You bypass a potentially lower quality HW implementation from the SW side

    See 2. 
    Usually most HW resampling implementations do not have sufficient horsepower
   (resources) to offer highest quality resampling algorithms. That can lead to mediocre
   results.


5. (Implementation) Cost




However. There's no such thing as a free lunch...




Digital Losses


Any digital filter  - respectively its related algorithms and calculations (e.g. volume control, convolution, crossover filtering, digital filters, resampling, equalization) - introduces losses to the original signal.

Usually multiple stages of data processing are passed to accomplish the resampling task.
E.g. if you resample your data, you won't just multiply your samples by a certain factor.
No, you'll add some new (artificial) interpolated samples that usually come with nasty side effects calling for more filters. 
And then you also have to attenuate the signal before upsampling it to avoid clipping.
Low level detail will suffer most at this point. And on top you might add a little noise (dither).

Beside that you'll see additional load on a system caused by more extensive data processing. Usually this causes another type of distortion (e.g. noise).


Bottom line. There's a lot of stuff to consider. Ending up with an objective better result will
depend on quite a number of factors and a lot of in-depth research.


For me the driving factor of this exercise is to find the best solution or better "the better compromise".



My project



One of my DACs internally converts the samples to 352k8/384kHz and then applies this or that filter afterwards.
The datasheet has shown that this DAC won't apply its internal filters at samplerates of 352k8/384kHz. 

That means if I now feed 352k8/384kHz data, these data will pass by the internal resampler and filter untouched.

I'm basically now trying to bypass these potentially mediocre on-dac filters, by using a top quality resampler on the PC side. 

The challenge at this point is that you won't find much IMO useful advice on how to configure the resampler in the best possible way. 
There are infinite ways of configuring your converters. Pretty much all of them end up with
a compromise. These compromises - as the term suggests - are all flawed - some more, some less. 

And that's why there's no "that's it" solution. It'll be a matter of taste and environment if
a solution works for you or doesn't.

A good starting point for a rather objective approach is the Infinite Wave site.
Over there you'll find pretty much all state-of-the-art resamplers compared side-by-side 
on a technical level by looking at the results.

Sound differences are not discussed! For a good reason! These discussions never really
end up well.

So. What resampler should we use then??  To make a long story short:

We are going to use the very powerful sound processing tool called sox.

sox IMO can compete with the best commercial audio resampling tools out there.
You get an idea once you've been on the infinite wave site.

I also compared sox with e.g. Izotope, Adobe Audition and Voxengo r8brain -- highly regarded commercial tools soundwise.
For me there was no reason to switch to either of the commercial siblings.

sox is very flexible, fast, runs headless, it's free and it's available on all computer platforms.

It's also part of e.g. the Logitechmediaserver or can be used with squeezelite or MPD. 
Even JRiver Media Center under Windows makes use of sox.



DACs


The vast majority of DACs and soundcards run samplerate converters in front of the actual
DAC chip.
Nowadays many devices lift samplerates up to 352.8/384kHz. Some (E.g. Sabre DACs) even higher than that.

You'd actually need to find out what your DAC is doing before proceeding. You need to find out if and how your DAC resamples your data. That'll define if you should do this exercise at all, and if so, it'll get you your target samplerate.
On e.g. Sabre DACs you can forget this exercise. The internal samplerates are much higher
than those we could generate with sox.

Your DAC's internal processing samplerate should then be your target samplerate in the test-cases and examples discussed later on.

There are traps of course.

Some devices do internal DSP processing prior to the DA or even ASRC process.
It might happen that the data gets upsampled to just 96kHz first, e.g. to feed an internal DSP. Then this would be your target samplerate, and the actual DAC chip or ASRC rate wouldn't matter anymore. You need to get to know your DAC is all I'm saying.

E.g. my Allo HAT DACs based on the TI PCM51xx DAC family come with 4 selectable filters.
These filters get bypassed at samplerates of 352.8/384kHz. That's my hook.
I simply assume that these internal filters are not of the highest quality and hope that my resampled
material gives better results.

Bottom line. 

352.8/384kHz are gonna be my target samplerates.
My goal would be to experience a better soundquality through bypassing the internal filters by feeding upsampled material.


Quality


Let's see what we can find out about different resampling qualities.

First, I already mentioned the Infinite Wave site. You'll find many resamplers and resampler configurations listed there. It's the best place to get an idea about resampler quality.
Read the help section over there first to understand what they are talking about.

You need to realize though that not just one parameter makes a good resampler.
A perfect graph in one area might mean artifacts in another area.
E.g. very high bandwidth settings with very steep filters cause a lot of bad ringing. Especially the pre-ringing part is bad - things usually sound quite aggressive and sharp.
On the other hand, filters with no or low pre-ringing cause serious phase shifts, which usually translates into a loss of detail.

Yep. That's an issue. You can't have it all.

Of course it's not all black and white. There are acceptable compromises out there.

You'll see that the different sox 14.4 resampling modes listed over at Infinite Wave perform quite well.

Our task now is to find the "best" resampling options with today's sox version.


I scanned the net and couldn't find any kind of common conclusion about what would be the best (compromise) set of resampling options for sox.
You'll find plenty about the theoretical and measured differences, but no common opinion about the best method from a listening perspective.

As I suggested earlier. Perhaps there's none. They are all compromises.
It might be a matter of taste after all, since all settings add their own sound signature!!!

It's a bit like rolling tubes.

But again. For me the task is to find the best compromise for a certain situation. If we cannot avoid resampling, we at least should get it done in the best possible way.

Below is a copy of the sox resampling mode comparisons.





What you see are the pre- and post-ringing results of the different options.
Ringing (pre- and post-) effects are one type of the filter losses they're talking about in the above graph.
The nasty pre-ringing effects that you see to the left of the 0 line (purely mathematical leftovers) should be avoided as much as possible.


You'll also see the "highly regarded" ssrc resampler results at the bottom of the above comparison. ssrc is used by dbPoweramp btw. I won't comment on it: Pictures say more than a thousand words. I also did listening tests comparing ssrc and sox.
I've made my choice.


Below you'll see a typical resampled impulse (source: Infinite Wave) with a linear filter. In a perfect world the result would be a single straight upright line.
The waveforms (ringing) to the left (pre) and right (post) of the center of the pulse are filter-associated flaws. These artifacts will be added to the original music signal!!





We can continue with phase shifts, passband restrictions (not the entire frequency band is transferred), noise, intermodulation, aliasing... you name it.


There are losses. No question. We need to find the best compromise.
Certain flaws pop up at extremely low levels of -170dB !?!?
Levels much lower than any DAC out there would be able to reproduce.
These should be inaudible.

However. We're always talking about a mix of complex signals/artifacts at any given time.
A complex audio signal, not a simple 1kHz test signal, might cause a lot of artifacts.
At some point the related issues fold back into the audible range. Obviously there are
audible differences between the different algorithms and tools.

Exercise


I put together some instructions for those of you, who'd like to play around a bit with resampling your 44.1/16 base data to higher samplerates on-the-fly.

Let's see how to implement resampling on a Logitechmediaserver to feed a SB Touch, Transporter or e.g. Squeezelite.

The key challenge is IMO to find the right balance between potential losses and gains when looking at the entire resampling process. It's not only about the above mentioned filter associated losses.

On the negative side, we also add jitter and noise to the environment by the additional load due to the processing of much higher sample rates and data volumes. This can make a small difference on top of the filter losses.

What does the final equation have to look like?

Basically, all resampling-related flaws combined need to have less impact than just feeding your DAC the original data.

That's the challenge. Let's see how far we'll get.

The filtering itself gives you numerous options to improve or to mess with the base material.

III. Platform independent conversion rule cases


Below you'll find some conversion rule examples.
Everything will be resampled from 44.1/16 (or any other rate) to 352800/24 and sent down to the DAC as flac!!

If you like you can change the target sample rate up to 384kHz by swapping the corresponding numbers in the examples below.

The actual key difference between Case 1 and Case 2 are the "-I" and "-L" options.

L = linear phase response (pre-echo = post-echo) (= default sox setting)
I = intermediate phase response, which is supposed to be a good compromise between L and
M = minimum phase response (no pre-echo, longest post-echo) (see also the sox manual).
      Minimum phase introduces phase shifts in the higher range (> 4kHz).


-v    = very high sampling quality
-a    = allow aliasing above the passband (causes less post-ringing)
-b 98 = passband bandwidth set to 98% (I tried 85, 90, the default 95 - for now it's 98)
-s    = alias for -b 99
-D    = suppress automatic dithering
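
To make the phase switches a bit easier to try out by ear, here's a minimal offline sketch that converts the same file three times with the three phase settings. The flags come from the option list above; the filenames (test1*.flac) are just placeholders, and the 352800 target rate and 24bit output follow the cases further down.

##phase comparison - command line##########################

sox test1.flac -t flac -C 0 -b 24 test1-L.flac rate -v -L -b 98 352800    # linear phase (default)
sox test1.flac -t flac -C 0 -b 24 test1-I.flac rate -v -I -b 98 352800    # intermediate phase
sox test1.flac -t flac -C 0 -b 24 test1-M.flac rate -v -M -b 98 352800    # minimum phase

#####################################################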


I skipped the gain adjustment in all cases. That might cause some clipping on a few samples. Of course you'd better add e.g. the option "gain -1" to attenuate the signal prior to resampling. See Case 3.


Note: 

The resampled data are converted to 24bit. This helps to avoid dithering (artificial noise)!!!

Keep in mind that the original 16bit CD data is dithered already. 
Dithering is the final task the mastering engineer does in the mastering process.
Basically there's quite an amount of artificial noise on your CD already.

Dithering twice - original and now during resampling - with potentially two different dithering algorithms might lead to lower quality.

However. There are opinions saying that since all DSP calculations are done in 32bit/64bit floating point, getting these back down to 24bit would justify adding a certain amount of dither.

Again. As many opinions as options out there. 


Let's have a look at the cases.

Offline resampling


For your initial tests the easiest approach would be to convert a single test 
file manually, thus "offline".

The example below shows how you can accomplish offline resampling of a file from a Linux command line.

There are also GUI based sox tools available to accomplish this task. Try to google "sox GUI"


Offline Case 1 (my preferred setting- Jun/2020 )


Upsampling a flac file to 352800 Hz. I use my favorite filter settings.


  • the flac will be re-compressed to compression level 0 (-C 0)
  • the target bit depth will be 24bit (-b 24) (most DACs even support 32bit nowadays)
  • very high quality resampling (-v) is chosen (requires powerful CPU!)
  • 95.4% bandwidth (-b 95.4)
  • a phase shift of 45%  (-p 45) slightly away from linear (50%)
  • allow aliasing (-a) (no issue at 352k8 Hz)
  • a tiny bit of sloped TPDF dithering (dither -S)
  • dither with a target precision of 23bit (-p 23)
              
The result comes with rather low pre-ringing, reasonable post-ringing and just a little phase shift. I allowed aliasing (-a) to lower the ringing effects even further. And added a little dither. I have chosen 352800 instead of 384000 as the target rate because 99% of the material out there is 44.1kHz based.

In most cases I therefore get synchronous (integer ratio) upsampling.

Note: You need to swap in your own filenames in the command below.

##offline resampling - command line########################

sox  test1.flac -t flac -C 0 -b 24 test1-src.flac rate -v -b 95.4 -p 45 -a 352800 dither -S -p 23

#####################################################

That's it already. Now you can play back your newly converted file.
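
If you want to double check what the conversion actually produced, sox ships a small file inspector (soxi, also reachable as "sox --i"). A quick sketch - test1-src.flac is the output file from the command above:

##check the result - command line##########################

soxi test1-src.flac       # shows sample rate, precision, channels, encoding
soxi -r test1-src.flac    # sample rate only - should report 352800

#####################################################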

Realtime Resampling

Below cases show how the realtime conversion can be accomplished on a Logitechmediaserver.


Case 1 (my preferred setting- Jun/2020 )

Realtime Case 1 is the same case as the Offline Case 1.
   
Below you see the matching Logitechmediaserver conversion rules for target formats
.flac and .wav. These rules must be added to the custom-convert.conf file (see Annex 1 for help). The server has to be restarted after adding these lines.

Note: The "# F" means only local files get converted. No streams!

##copy below into custom-convert.conf########################

flc flc * *
             # F
             [sox] -q -t flac $FILE$  -t flac -C 0 -b 24 - rate -v -b 95.4 -p 45 -a 352800  dither -S -p 23


flc pcm * *
             # F

             [sox] -q -t flac $FILE$  -t wavpcm -e signed  -b 24 - rate -v -b 95.4 -p 45 -a 352800  dither -S -p 23


#copy above########################


NOTE1: 

You can do selective routing and resampling on a per-client basis by adding your
client's MAC address to the above rules like this:

The 2nd asterisk gets replaced with your client's MAC:

flc flc * b9:af:e1:70:08:31


flc pcm * b9:af:e1:70:08:31


Having just two asterisks would mean that all your clients would receive resampled material.

NOTE2: 


For resampling to work for flac to pcm (wav) on LMS you need to add "-W" to the squeezelite startup options!
squeezelite then reads the samplerate info from the
data file header and ignores the info sent by the server.
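
Just as an illustration, a hedged example of such a squeezelite startup line is shown below. The ALSA output device and the player name are placeholders you'd replace with your own; the only point here is the added "-W" switch:

##squeezelite startup - example############################

squeezelite -o hw:0,0 -n "MyPlayer" -W

#####################################################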




OR



Case 2

   I = intermediate phase is supposed to be the best compromise between
   M = minimum phase and
   L = linear phase.
   It stands for rather small pre-ringing and reasonable post-ringing.
   It still comes with the phase shift!

##copy below########################


flc flc * *

             # F
             [sox] -q -t flac $FILE$ -t flac  -C 0 -b 24 - rate -v -I -b 90 -a 352800  


#copy above########################



Case 3

  This is Case 1 with signal attenuation of 1db prior to resampling

##copy below########################


flc flc * *

              # F
             [sox] -q -t flac $FILE$  -t flac -C 0 -b 24 - gain -1 rate -v -b 90 -p 50 -a 352800  dither -S



#copy above########################



I stumbled over an interesting discussion over here or here, which led to cases 4a and 4b.
Basically a guy called "viruskiller" tried to mimic with sox the highly regarded "apodizing" resampling filters as done by Meridian and by Ayre.
Avoiding ringing artifacts is the ultimate goal.

As you'll see, cases 4a/b differ from the above solutions. These solutions use minimum phase filters first of all and also lower the bandwidth substantially.
Minimum phase filters supposedly have no phase shift in the passband and won't cause
pre-ringing. On top of that, aliasing is allowed to reduce the post-ringing even further.

Case 4a consists of a minimum phase filter with the bandwidth limited to 20kHz (-3dB point) (90.7), which allows for a less steep filter - aliasing is allowed, which causes less post-ringing and shouldn't fold artifacts back into the audible range at these high target samplerates.

The examples 4a/b also recommend applying a slightly sloped TPDF dither.

Usually you only need dither if you go back down to 16 bit. I left it in. It won't hurt I guess.


Case 4a (allegedly similar to filters offered by Meridian)


##copy below########################


flc flc * *

              # F
             [sox] -q  -t flac $FILE$ -t flac -C 0 -b 24 - rate -v -a -M -b 90.7 352800 dither -S



#copy above########################


Case 4b (allegedly similar to filters offered by Ayre)

##copy below########################


flc flc * *

         # F
         [sox] -q  -t flac  $FILE$ -t flac -e signed -C 0 -b 24 - rate -v -a -M -b 87.5 352800 dither -S



#copy above########################




#####################################################


Results


Using the M - minimum phase and I - intermediate phase filter settings seemed to cause a slightly distorted sound to me.

Maybe the phase shift in the upper frequency range is causing it?? (see the Infinite Wave charts for the sox M settings)

It translates into a loss of low level detail. I experience a disappearing (smearing) of instruments in the back rows of an orchestra. The separation suffers. It's like a softened picture.

Case 1 is currently my best idea. It's a little softer than a pure linear maximum bandwidth approach, a little less sharp, without that minimum phase blur, and it also comes with a very nicely articulated low end.

The basic idea behind all this was the chance that some downstream electronics might respond better to resampled data. That's been accomplished in my case. I do prefer the synchronous upsampling approach on my Allo DACs that come with the TI PCM51xx family DAC chips.


The above filter examples are IMO a good starting point. I am by no means a DSP filter guru.
I tried to choose the highest quality configurations that made sense to me and that were available and discussed on the net.

I'm sure there's some more fine-tuning potential. From all I read during the last few days there seems to be no right or wrong. Many people consider the discussions around the subject to be of a theoretical nature when it comes to soundquality.

I for myself can clearly hear differences with pretty much every parameter change.

It's usually a matter of system, taste, expectations and experience which filter signature sounds best to you.

I highly recommend to give all that a try. You can't do anything wrong. 


Enjoy.


##################################################
##################################################
Annex 1: custom-convert.conf


Let's prepare your server for introducing the conversion rules into the custom-convert.conf file.

I. Windows 7


1. Open Notepad with administrator rights (right click the app within the Accessories submenu)
2. Load convert.conf from C:\Program Files (x86)\Squeezebox\server
3. Save it as "custom-convert.conf"
4. Delete everything inside (make sure that you don't delete your original convert.conf contents!!!!)
5. Copy one of the below conversion rules into the editor
6. Save your "custom-convert.conf"
7. Restart your server
8. Select for "flc flc" the option "flac/sox" in the "file type" advanced settings and push "Apply".

Deactivation:
9. To disable the setting just remove custom-convert.conf or its content and restart the server.


II.  Linux


You need to generate an /etc/squeezeboxserver/custom-convert.conf
file with either of the below conversion rules (the lines between the hashes).
The settings within this file override the default
/etc/squeezeboxserver/convert.conf settings. If you did the changes inside
convert.conf, which would be an option, your changes would be gone with the next SB server update/upgrade. Besides that, it's not that easy to disable those
settings again. With the custom-convert.conf you just move the file to e.g. custom-convert.conf.bak and restart the server.

1. Open a terminal
2. sudo touch /etc/squeezeboxserver/custom-convert.conf
3. sudo chmod 666 /etc/squeezeboxserver/custom-convert.conf
4. sudo gedit /etc/squeezeboxserver/custom-convert.conf
5. Copy one of below conversion rules into the editor

6. Save your "custom-convert.conf"
7. Restart your server 
8. Select "flc flc" option "flac/sox" in file type settings and push "Apply".

Deactivation: 
9. sudo  mv /etc/squeezeboxserver/custom-convert.conf /etc/squeezeboxserver/custom-convert.conf.bak
10. reboot 
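
If you prefer doing it all in one go from the terminal, a minimal sketch could look like the lines below. It writes the Case 1 rule from above and assumes the Debian-style "squeezeboxserver" service name - on other installs the service might be called logitechmediaserver, so adjust the restart command accordingly.

#############################################

sudo tee /etc/squeezeboxserver/custom-convert.conf > /dev/null << 'EOF'
flc flc * *
	# F
	[sox] -q -t flac $FILE$ -t flac -C 0 -b 24 - rate -v -b 95.4 -p 45 -a 352800 dither -S -p 23
EOF

sudo service squeezeboxserver restart

#############################################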




#############################################

CD Extraction

With this article I'd like to tackle an issue, which always makes me feel odd
when thinking about it.

"CD Extraction"...

... do I really have it 100% under control ...

... after hundreds of rips and years of active involvement in Computer based audio !?!?!


I just read two articles in the latest issue of the German audio magazine "Stereo". Stereo is one of the biggest, if not the biggest, of its kind over here in Germany.
I do think they have quite a good reputation in the market.

This month (05/11) Stereo stuck their heads into Computer CD-drives and extraction software to compare the results generated by the tools and drives.

The really interesting thing about it was the result.
The soundquality ranking of the extraction results on different drives and with different software in particular got my full attention.
As a matter of fact, according to Stereo, drives make a difference and tools make a difference to soundquality too - and they are not talking about subtle differences if you look at the SQ ranking. There are drives going as low as 88% (SonyOptiArc DRX-S 77 U) and extraction tools as low as 94% (EAC!!!!) in the SQ ranking.

Guess what. EAC - was the worst of all.

Do I have a reason to question that Stereo article, assuming somewhat professional test-cases and setups (using Accurate Rip etc.) !?!?!

Hmmh.


I consider myself quite open and tolerant when somebody reports soundquality differences on things which in theory shouldn't show any.
Have a look at my SB Touch Toolbox. It wouldn't exist if I'd followed all those "theories".
A theory works if you're aware of all the facts and you consider all of them.
Or if you're "able" to explain exactly what your theory is based on.
The more complex things get, the more difficult it gets to consider all aspects, of course.
If you don't know all aspects, you'll have a hard time explaining them. Nor will you be able to consider them.

Of course we all know that audio reviews in particular can't do more than give a rough direction. Especially soundquality rankings are usually pretty subjective.

However - I honestly do have my doubts about hearing any differences on presumably 100% identical files in a carefully chosen test environment. (Let's see. Perhaps they'll invite me for a session. ;) )

What to do now? Torture Google, check out the community at Audio Asylum,
get in touch with Stereo, do a little study myself!?!?!
 
Guess what - I did all that.

Google didn't really come up with satisfactory results. As usual, lots of fragmented stuff. At Audio Asylum I had to face the usual lectures, and Ulrich Wienforth - the guy in charge at Stereo - responded (via mail conversation) to my "expressed" doubts by trying to defend the Stereo position. That was expected. And is IMO fair enough. (I do not see the usual commercial or marketing implications that you very often find behind typical reviews in this case, because at least the extraction tools they tested were all free of charge.) For those who read those articles: What he didn't mention in the article was that the listening test was done via a streaming client. That
fact he did mention via mail! (If you know the article - I consider this an important piece of information)

Just to make it clear. I do not question that Stereo experienced differences.
I do question their test cases and test environment first of all.

I knew I had to do more than just arguing. I wanted to make 100% sure
and show that all extracted files "can" be 100% identical.
I had to show them and myself that I'm right about that very minimum requirement.


To conduct my little study I intended to come up with a 100% waterproof comparison. My idea was to extract the same track with multiple tools
and drives and run a waterproof SHA-2 256bit checksum over the entire file.
The checksums provided by the extraction tools are IMO useless for comparison purposes. You can't use them if you want to conduct a 1:1
comparison across different tools, since they all use different algorithms.

By doing the comparison the full-file way I cover PCM data, headers, 0-bytes and meta-data altogether. IMO the only way to compare apples to apples. You should know that just looking at e.g. the filesize wouldn't be sufficient.
If you change e.g. the drive offset the file size remains the same, but the content will differ!

I wanted to make 100% sure that a difference in soundquality can not be related to the most obvious issue -- a different file content first of all.

There might be other issues, like disk fragmentation causing a different load and thus different jitter during the playback of identical files, or similar. But let's keep those speculations for a later stage. There are easy ways to cope with this issue.
First I'd like to prove that all files can be extracted 100% identically,
if we leave out things like e.g. very messy drives or scratched CDs.


CD Extraction tools - Analysis - Summary
-------------------------------------------------------------------------

I tested three different subjects:

1. Extraction Tool Comparison
2. Drive Comparison
3. Flac en-decoding

My choice of tools was cdparanoia (Linux), iTunes, dbPoweramp, EAC and foobar (pretty much in line with the Stereo review - they didn't have dbP on the list because it is not free - and I put it in because to me it is the reference app).

The target format is RIFF wav at 16 bits and 44.1kHz.

I ran test-cases with and without drive offset on a Plexwriter Premium 1
drive (still a reference drive) and a standard Toshiba DVD 5372V.

I ran 2 different checksum tests on the extracted files:

1. internal PCM sha-1 checksum - with shntool
2. sha-2 checksum over the entire file - with sha256sum

for rips with and without drive offset corrected.
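
For those who'd like to repeat the comparison on their own rips, below is a minimal sketch of how such a checksum run could be looped over a directory of extracted files. It simply reuses the two commands listed in Appendix 2; the /tmp/rips path is just an assumption.

#!/bin/bash
# run both checksums over every extracted .wav file (sketch)
for f in /tmp/rips/*.wav; do
    shntool hash -s "$f"     # SHA-1 over the PCM data only
    sha256sum "$f"           # SHA-2 over the entire file incl. header and tags
done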


Result:

1.
cdp, eac, dbp and fob deliver 100% identical results on all tests.
There are no differences to be found in the files at all, which actually makes the separate PCM test obsolete.

Note:
Meta-data settings must be disabled within EAC and dbp to avoid
a meta-tag footer (yes - within a .wav file) in the corresponding .wav file.
This I only figured out by running the PCM checksum test-case!!!
The PCM checksums were identical on those files, the file checksums were not.

2.
iTunes delivers identical results compared to the other tools on test-cases where NO drive-offset (non-compliant Accurate Rip mode) is configured.
Drive-offset corrections with iTunes are not possible!!!

3.
The different drives I tested deliver identical results if the Accurate Rip drive-offset is used. Without AR drive-offset configured the drives do not deliver the
same PCM data!!! That's why iTunes will never deliver identical data from different drives.


4.
JOOC I added a flac en- and decoding test-case to my little study.
I wanted to verify whether the SHA checksum of the original .wav file remains identical after flac en- and decoding. As part of the test I added tags to the flac during encoding. I also used different compression levels for the test-cases.
I can confirm that - as expected - the pre-/post-checksums are identical.
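
Below is a minimal sketch of that flac round-trip - the filenames and the tag are placeholders; only the flac switches and the final sha256sum comparison matter:

sha256sum track01.wav                                        # checksum of the original rip
flac -0 -T "ARTIST=Test" -o track01_flc0.flac track01.wav    # encode, compression level 0, with a tag
flac -8 -T "ARTIST=Test" -o track01_flc8.flac track01.wav    # encode, compression level 8
flac -d -o track01_flc0.wav track01_flc0.flac                # decode back to wav
flac -d -o track01_flc8.wav track01_flc8.flac
sha256sum track01_flc0.wav track01_flc8.wav                  # should match the original checksum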



Conclusion:
-------------------------------------------------------------------------

From my perspective the results look great.

The tools "can" deliver 100% identical results. The key challenge is
to configure them correctly to do so.

It required some digging into the setup menus of EAC and dbPoweramp
to get there.


When it comes to the drives: extraction results would all differ unless you
use the drive offset as outlined by Accurate Rip.

There seems to be one question mark behind the Accurate Rip drive-offset database though. It's been discussed on the net, with the EAC designer Andre Wiethoff, that the drive offset as specified by Accurate Rip might not be correct.
The number should actually be 30 samples lower than the AR reported number.
That would mean that the entire Accurate Rip database is not
correct. And afaik it hasn't been corrected since then.
When introducing their (dbPoweramp = Accurate Rip) improved AR checksum recently, AR IMO should also have corrected this potential offset problem.


Still, getting equal results from different drives would probably be better than working without drive-offset adjustments at all.
Today's "reference data" would just be wrong in a very consistent way.
And if a consistently wrong drive offset doesn't harm the extraction result and the SQ experience, it wouldn't be an issue at all to work with such a wrong offset.

The other problem is that Accurate Rip leaves out the first 5 frames (you'll also read about 2940 samples) of the first track and the last 5 frames of the last track. That's done on purpose. These are the critical, usually inconsistent areas when using different drives. They'd never manage to build a reliable reference if they included these data.

I do think Accurate Rip should fix at least the offset issue, if the "-30 sample" issue were verified and confirmed.

It's not nice to have several thousand samples out of the equation either.
But I guess that's the price to pay for a standardized one-size-fits-all approach.

The next thing to figure out is whether the drive offset problem really causes differences in soundquality. Folks, you're invited to join the club of testers.
I'd guess that many of you guys reading this run highly resolving systems.
Note: If there were differences between those files, you wouldn't figure that out on a standard audio system.

Just subtract 30 samples from your AR drive-offset figure and do the rip again. (Please let me know if you experience any difference)

iTunes does not offer any drive-offset option right now.
If that's any better - I doubt it. I for sure wouldn't rip any serious data with iTunes. As mentioned before, you'll get different results on different drives and you won't end up with a reference quality rip.
That of course applies to all other drives as well, if the rips are run without offset correction.

Advice: Before you start ripping your CDs make sure that different drives and tools generate the same results. Just one wrong setting anywhere might change your result.

During my little study I ran a sha256sum SHA-2 (state-of-the-art) checksum on each file.
That's more reliable than any checksum offered by any extraction tool or Accurate Rip and lets me compare the files.




Bottom line.

The tested drives and tools deliver 100% identical results if the configuration is done right. From that perspective there shouldn't be any difference on SQ.

If you don't use Accurate Rip drive offsets, every rip will be different on every drive. And that might be an issue.

When it comes to the extraction tools. They pretty much all deliver exactly the
same data - if the setup is correct.

From my perspective this conclusion should be sufficient to provide a solid base for any further investigations.


Finally. I did it. I sat down and listened to all of the test files that I generated.
Honestly. I had a hard time identifying any difference between the files.
Perhaps my system is not good enough. Or my hearing capabilities are just not sufficient. I'd love to sit down with those Stereo guys and run that test at their place.


I hope you find that article somewhat interesting. As always - feedback  is more than welcome.


In the appendixes below you'll find the test results of my test cases and the test environment.


Enjoy.




.................................................................................................................................
Appendix 1:
.................................................................................................................................
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
soundcheck's checksum test Rev 1 --- Sun May 15 11:15:52 CEST 2011
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::


:::TOOLTEST::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::


:::Checksum SHA-1 internal PCM - Drive 1: with drive offset::::::::::::::::::::::::::::::::::::
8fc5a1f4332d6b20c9dcfbdb74220ada9b84ee0c  /track01_cdp_d1_o030.wav
8fc5a1f4332d6b20c9dcfbdb74220ada9b84ee0c  /track01_dbp_d1_o030.wav
8fc5a1f4332d6b20c9dcfbdb74220ada9b84ee0c  /track01_eac_d1_o030.wav
8fc5a1f4332d6b20c9dcfbdb74220ada9b84ee0c  /track01_fob_d1_o030.wav

:::Checksum SHA-1 internal PCM - Drive 1: without drive offset:::::::::::::::::::::::::::::::::
a82ff6f76c7f336db922c9d210b2d0d6a7cbead3  /track01_cdp_d1_o000.wav
a82ff6f76c7f336db922c9d210b2d0d6a7cbead3  /track01_dbp_d1_o000.wav
a82ff6f76c7f336db922c9d210b2d0d6a7cbead3  /track01_eac_d1_o000.wav
a82ff6f76c7f336db922c9d210b2d0d6a7cbead3  /track01_fob_d1_o000.wav
a82ff6f76c7f336db922c9d210b2d0d6a7cbead3  /track01_itu_d1_o000.wav

:::Checksum SHA-2(256) entire file - Drive 1: with drive offset::::::::::::::::::::::::::::::::
49eacbf1192931289158fbdd72ec56353d65aaa4086fa432eb161290049909e6  /track01_cdp_d1_o030.wav
49eacbf1192931289158fbdd72ec56353d65aaa4086fa432eb161290049909e6  /track01_dbp_d1_o030.wav
49eacbf1192931289158fbdd72ec56353d65aaa4086fa432eb161290049909e6  /track01_eac_d1_o030.wav
49eacbf1192931289158fbdd72ec56353d65aaa4086fa432eb161290049909e6  /track01_fob_d1_o030.wav

:::Checksum SHA-2(256) entire file - Drive 1: without drive offset:::::::::::::::::::::::::::::
43be2e2ecd260699165a874efaf7bcfc88c3e0a3c02f0b32fc821ce58786addf  /track01_cdp_d1_o000.wav
43be2e2ecd260699165a874efaf7bcfc88c3e0a3c02f0b32fc821ce58786addf  /track01_dbp_d1_o000.wav
43be2e2ecd260699165a874efaf7bcfc88c3e0a3c02f0b32fc821ce58786addf  /track01_eac_d1_o000.wav
43be2e2ecd260699165a874efaf7bcfc88c3e0a3c02f0b32fc821ce58786addf  /track01_fob_d1_o000.wav
43be2e2ecd260699165a874efaf7bcfc88c3e0a3c02f0b32fc821ce58786addf  /track01_itu_d1_o000.wav


DRIVETEST:::::::::::::::::::::::::::::::::::::::::::::::::::


:::Checksum SHA-1 internal PCM - Drive 2:::::::::::::::::::::::::::::::::::::::::::::::::::
f1f2cb11c121608ced3df64bca36051fe05e1638  /track01_dbp_d2_o000.wav
8fc5a1f4332d6b20c9dcfbdb74220ada9b84ee0c  /track01_dbp_d2_o701.wav

:::Checksum SHA-2(256) entire file - Drive 2::::::::::::::::::::::::::::::::::::::::::::::::::::
9ee0a2f83b2ebc5d9810bb4af343c9541b53c2e1462b4ea40e411e17eab759b2  /track01_dbp_d2_o000.wav
49eacbf1192931289158fbdd72ec56353d65aaa4086fa432eb161290049909e6  /track01_dbp_d2_o701.wav



FLACTEST::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::


:::Checksum SHA-2(256) entire file - FLAC en-/decode:::::::::::::::::::::::::::::::::::::::::::
43be2e2ecd260699165a874efaf7bcfc88c3e0a3c02f0b32fc821ce58786addf  /track01_cdp_d1_o000_flc0.wav
43be2e2ecd260699165a874efaf7bcfc88c3e0a3c02f0b32fc821ce58786addf  /track01_cdp_d1_o000_flc8.wav

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Tools used: 1. shntool 2. sha256sum 3. flac
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
.................................................................................................................................
Appendix 2:
.................................................................................................................................
Tools used:

Environment:

OS:
Ubuntu 11.01
Windows 7

Drives:
Plextor Premium 1 (AR drive-offset : +30)
Toshiba SDR 5372V (AR drive-offset : +704)

CD Extractors:

1. EAC
2. dbPoweramp
3. iTunes (Windows 7)
4. Foobar
5. cdparanoia (Linux)


Analysis:

Via terminal commandline:

sha256sum (Linux) - 8.5 - 256bit sha-2 checksum on entire file


sha256sum <filename>

shntool (Linux) - 3.0.7 sha-1 checksum on PCM data only

shntool hash -s <filename>

Of course I've written a little program to run all tests automatically.







Hires Audio - Treasure Island

Introduction
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Lets dig up some treasures. It's time to run another little study.




A post over at Audio Asylum referred to quite an interesting article about "fake" HiRes audio files.

Within this article it's been stated that several HiRes audio downloads are potentially fakes.

What is HiRes? What is a HiRes fake?

The problem: The term HiRes is neither protected, nor does it clearly define what can be expected.

The only thing that's defined is "HiRes Audio = audio data with greater than 44.1kHz samplerate"

Then why did HD-Tracks agree to replace potential fakes??? (for those who complained!)





They maybe just wanted to stay in business!?!?

*******

What would you expect if buying HiRes materials at a pretty steep pricetag???

I tell you what you wouldn't expect:

E.g. A simply upsampled CD!
E.g. A simply upsampled track from a low Q base track.

Tracks that fall into these categories might be called "fakes" from a customer perspective.
Legally, people would have a hard time winning a case over this - I guess.


Don't forget. You can't avoid "resampling" if you want to sell stuff at all samplerates.
You then just better not call your data "native"!
There is only one native samplerate - the one used during the recording
or transfer-from-analog process.

HD-Tracks should clearly flag the download that's the "native" one. They simply don't do it.

HD-Tracks does have its understanding of and commitment to HiRes defined nowadays.
I do find it rather hypocritical to claim nowadays, in other words, "others might sell you fakes - we don't".


***********


Fast forward. What's the situation today?

First of all. Why do we still talk about "fakes"?

* as HD-Tracks nicely suggests   ...others might sell you fakes...
* you might own some old HD tracks

I think it's good to keep an eye on it.

What actually is "native" and/or "master" quality, or at least,  what can be expected ?
That's actually not that easy to answer!

A bit of background.

At the beginning of the audio production process there's a raw file,
actually several of them, depending on the number of channels being recorded.

(Let's take the HW recording setup (mic/cable/ADC) out of the equation. That obviously causes losses of its own.)

From that point onwards - having the digital raw files on the HDD - these raw data will be heavily manipulated.
Several different tools - none of them lossless - degrade the quality of the base raw data in order to accomplish a "pleasant" result.

If we talk about mixing, we talk about

* panning and level balancing (voice->center, guitar->right, bass->left)
* compressing (all instruments exhibit a different dynamic range - level that out)
* equalizing (clean up the spectrum for a pleasant and coherent sound)
* reverb (e.g. apply fake room echoes)

Finally the original samplerate of the mix gets converted to a target rate and bitdepth, which
usually needs another compression pass (to avoid clipping), and as the very final step dither (a well defined artificial noise floor) is applied.

Not only do these tasks cause rather severe losses by definition, the tools, algorithms and filters used are not lossless either and mess with the result even further.

Still. This would be a "master" quality result.

With the above mastering and mixing process in mind you might also understand why even a "remastered" CD at 44.1kHz might sound so much "better" than the original CD you bought 10 years ago.

If you just change your mastering tools after ten years, considering the evolution in the DSP arena,
a new master will sound completely different from the original master.
If you then play around with the above mentioned tools (panning, compressing, ...) you'll be able to convince potential customers to buy that very same CD once more - preferably at a steep pricetag as HiRes.

To sum that up: A master mix is basically a result based on the "taste" of the mastering engineer and the tools he's been using. If you change either of them you'll end up with a completely different result!
No matter what samplerate is being used or sold to you! HiRes might have less impact than the remastering itself!

Linn e.g. sells "Master Quality" tracks at different samplerates. What does this mean to you?
By now you should have realized: Nothing! Linn would have to lay out what they exactly mean by it.

There is another term floating around -- "transfers".
There are e.g. DSD transfers or tape transfers.
Those do not have anything to do with the term "master" either.
These transfers just get the mastering engineer a set of new raw audio files as a base for further tampering.

If you now own a CD supplied at 44100Hz samplerate and 16bit, you can usually
assume that this is the result which is furthest away from the original recording - with maximum
tampering applied. And now consider this CD is used for making HiRes material!

Plenty of talking...


It's time to have a look at some of my own files.



::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::


First I'd like to see how to identify resampled material. I used the spectrogram feature of sox to generate the images below.

Image 1 shows the spectrogram of native pink noise sampled at 24/96. As you can see there is frequency content all the way up to sampling-frequency/2 = 48kHz.



 Image1: Native 24/96 pink noise spectrum


The 2nd image shows a 24/96 spectrum based on upsampled 44.1/16 pink noise.



Image2: Upsampled 44.1/16 pink noise

As you can see, the original frequency range of a native 44.1kHz file remains limited to 44.1/2 = 22.05kHz even when resampled to 96kHz. You can't really hide the origin of the base data.



This example shows pretty clearly how upsampling appears in a HiRes spectrum. That's what we need to look for as the "fake indicator" when analyzing real-life cases.
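
If you'd like to reproduce the two pink noise plots yourself, here's a small sketch. It generates 30 seconds of pink noise natively at 24/96 and at 16/44.1, upsamples the latter to 24/96 and draws a spectrogram of both - the filenames are arbitrary:

sox -n -r 96000 -b 24 pink9624.wav synth 30 pinknoise      # native 24/96 pink noise
sox -n -r 44100 -b 16 pink4416.wav synth 30 pinknoise      # 16/44.1 pink noise
sox pink4416.wav -r 96000 -b 24 pink4416-up.wav            # upsample it to 24/96
sox pink9624.wav -n spectrogram -o native.png              # content reaches up to 48kHz
sox pink4416-up.wav -n spectrogram -o upsampled.png        # content stops at ~22kHz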

Identifying downsampling from 192kHz to 96kHz should be much more difficult!!!

Obviously 48kHz upsampling should also be rather easy to identify. The line in the spectrum of Image 2 would then be drawn at 24kHz instead of 22.05kHz.
And folks - there are also 48kHz based fakes out there.

But what happens if we face analog tape transfers with quite a low base bandwidth? How do we identify those?

Let's see how things develop. I hope my learning curve stays as steep as it currently is for a little while.




Real Life Cases
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

I have to admit that I'm by no means a specialist in audio file spectrum analysis.
Please tell me if my interpretations or conclusions below are incorrect or misleading. You're of course invited to support my investigations. Advice is highly appreciated by me and, I guess, by the community reading this article.



I start with analyzing two examples of very recent HD-Tracks downloads.

...................................................................................................

Case 1
...................................................................................................

HD Tracks - Deep Rumba - Track 01 - 24bit 88.2kHz
purchased and downloaded 05-2011


Image 3: Deep Rumba - Track 1 - Audacity plot


You can clearly see the lowpass behavior towards 22kHz, which is characteristic of 16/44.1 material.
The dip at the top end of the spectrum could be some kind of dither added after the resampling.

Hmmh. First try, first hit!?!? What do you think?


The 2nd spectrogram doesn't look that obvious anymore



Image 4: Deep Rumba - Track 1 - Sox plot

Here I see artifacts above 22khz.  And shaped dither at the top end. Hmmh.

What would you guys say?? Fake Yes/No?

Looking at the Audacity plot I'd say a clear "YES". The sox plot makes me wonder if it is really that clear.

I need to figure out what those lines (harmonics?) in the plot are about.

To me it's also more than unclear why 24bit HiRes data is being dithered at all.

...................................................................................................

Case 2:
...................................................................................................

HD-Tracks Paul Simon - So Beautiful So What - Track 01 - 24bits 96khz
purchased 05/2011




Image 5: Paul Simon - So Beautiful So What - Track 1 - Audacity plot


You see the much wider spectrum towards the top end, which is characteristic of the higher sample rate. It still won't tell you whether the track was downsampled from an even higher rate or transferred from DSD. It clearly doesn't look as if it was upsampled from 44.1/16.

To confirm the Audacity view - here is the corresponding Sox plot:


Image 6: Paul Simon - Track 01 - Sox plot

This spectrum looks somewhat different from the Case 1 spectrum. I'd call it awful!?!?
Energy all over the place. That makes me even more suspicious about Case 1.

Though I need to figure out why there's so much going on above 20kHz on HiRes material. Is this real data or is it some kind of other garbage artifact?
That I'd like to find out.

So far so good.

There are very obvious differences between those files. Both are sold as HiRes files - just to remind you. I still need to get those characteristics properly interpreted.

More treasures to be lifted. See Appendix 2. You'll find all kinds of spectra for different files sold at Linn, iTrax and HD-Tracks.

I - at least - find this a really interesting exercise. ;)

Enjoy.

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::


Appendix 1: Generating Spectrograms with sox and audacity
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

I do all my analysis on an Ubuntu Linux.

I can very easily write scripts that look for my 2496 files and
generate all plots automatically. 

What you need to do first:

1.
First open a terminal and install the required programs:

sudo apt-get install sox imagemagick audacity

2.
Copy your HiresFile of choice to e.g. /tmp
You don't want to mess around with the original file.

Case 1: Sox
............................................................................................................
Copy/Paste below line into a terminal.


FILEX="/tmp/yourfilename.flac"; sox $FILEX -n remix 2 trim 0 30 spectrogram -x 600 -y 200 -z 100 -t "$FILEX" -o $FILEX.png ; display $FILEX.png &

Just swap out "yourfilename.flac" with your Hires-filename and then press return.



Done.
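
The little batch script mentioned at the top of this appendix could look roughly like the sketch below. The spectrogram call is the one-liner from above; the /tmp search path and the rest of the script are assumptions you'd adapt to your own library layout.

#!/bin/bash
# draw a spectrogram png next to every flac found below /tmp (sketch)
find /tmp -name "*.flac" | while read -r FILEX; do
    sox "$FILEX" -n remix 2 trim 0 30 spectrogram -x 600 -y 200 -z 100 -t "$FILEX" -o "$FILEX.png"
done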



Case 2: Audacity
............................................................................................................

If you want to analyze your own 24/96 tracks you can do it also by using  Audacity. It's freeware and available under Linux as well as Windows.


You just load your track into audacity.
Then you select "Plot Spectrum" from the "Analyse" menu .
And that'll be it.


:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Appendix 2:  HiRes - File Spectra
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

...........................
ALBUM 1
...........................


Looks OK to me.

...........................
ALBUM2
...........................



Hmmh. Could it be based on  a 48khz master?

...........................
ALBUM 3
...........................


Looks OK. This time an example with shaped dither applied.

...........................
ALBUM4
...........................


Hmmh. Pretty flat spectrum. I'm not sure what to do with that one.

...........................
ALBUM5
...........................



Looks rather Ok to me. Though I'm wondering about that distortion at around 16khz. You can even see its harmonic at 32khz. That can't be right.

...........................
ALBUM 6
...........................


 Looks OK - doesn't it.

...........................
ALBUM 7
...........................


Should be OK. There is content above 24khz.

...........................
ALBUM 8
...........................


The spikes go clearly above 24khz. You can also clearly see the added
dither at the top. Still I'm gonna load it into Audacity to verify it.

...........................
ALBUM 9
...........................


This is a native 24/48 file.

...........................
ALBUM 10
...........................



Hmmh. Looks like native 24/96. The spectrum looks pretty distorted though.

...........................
ALBUM 11
...........................



Looks OK to me.

...........................
ALBUM12
...........................



Looks OK to me.

...........................
ALBUM 13
...........................


Looks Ok. There seems to be some dithering done


...........................
ALBUM 14
...........................


I need to have a closer look at this one.

...........................
ALBUM 15
...........................



No idea what to say about that one. I'd say it looks rather OK.

...........................
ALBUM X
...........................

To be Continued


:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
END OF ARTICLE



Touch Toolbox 3.0 - HW and Network

This blog is Part II of the Touch Toolbox blog.


Over here  I'll touch upon HW modifications, the network and server environment from a HW perspective.

I consider all this an integral part of your streaming solution. All this will also make a difference.

1. Power supply
2. Lan optimization
3. SPDIF data link
4. Onboard modifications
5. Server considerations

You really should have a look at it.  You'll also  find some easy to implement tweaks.




NOTE:

I do not have any commercial relations to any vendors or manufacturers
mentioned in the text below. Yep. It's basically free marketing for those
who are mentioned here.



Let's get started.

  
1.  Power supply


The Touch switching power supply situation can be improved.

Don't be shocked. That would  happen to you even on >1500$ DACs btw.

The stock supply should get replaced with a high quality linear regulated device.

The stock supply is dimensioned for a 3A load. If you don't use USB disks etc. you'll get
along fine with a 1A supply.

Though experience shows it's always a good idea to go for an overdimensioned supply.
 
     
My commercial PS recommendation would be an S-Booster Power Supply  at around 140€ (currently 230V only) .

The Sbooster filter (a choke and some caps), which they also sell as a separate unit, I consider one of the key elements here. That filter simply goes into the 5V DC line right in front of your Touch DC input.
This filter can be bought as a standalone device for a couple of $.
It even works well on my battery + SuperTeddyReg based supply - between the TeddyReg and the Touch.

I tend to say that the filter makes more of a difference than the PS or regulators themselves. I had those TeddyRegs around. I wouldn't go for them any longer.

If you own a nice linear 5V supply already (eBay sells those for around 30$), you should at least try the Sbooster filter.

If you're a DIY minded person you'll find several recommendations on the net for building a similar filter yourself.
E.g. John Swenson made a nice proposal based on a Hammond choke.


You can also scan other HW and PS modifications supplied by many companies, if  DIY is not your cup of tea:

Just Google:

Welborne Labs
Bolder cables
Audiocom
Teddy Pardo
Paul Hynes Design

(enough of marketing at this point  ;) )



Advice:

Install my Toolbox first. You might be able to save some bucks
on at least the HW mods. If the SW is done first you'll notice much
less of an impact of the HW mods.

I've done many of the HW mods myself. My experience is that mods
like decoupling upgrades do generate a slightly more dynamic sound experience here and there.
Whether it's worth spending 250$ to get it done by an external company is another
story.

I'd say a power supply upgrade is something you should consider in any case.

Honestly, I would have a slight problem throwing in >250$ for a PS upgrade
plus HW mods on the Touch.




2. Network optimization



In theory your LAN/ethernet connection shouldn't cause any trouble.
If you ask certain "specialists" out there (you can put me on that list - I do have a telecommunications engineering degree myself), there shouldn't be any impact
from the WLAN or LAN.


Those people are correct in the sense that not a single bit gets lost.
And further, around 20s (depending on the sample rate) of data is buffered on the Touch.
So what's the deal? Aren't we on the safe side!?!?
 
Yep - in theory we should be!

However. Some people forget that the buffer continuously needs to be refilled and managed. And that there is a NIC in front of that buffer.
TCP/IP on its own comes with tons of parameters, which are by
default not set perfectly. Varying load conditions and other non-linearities on the network will cause quite some data jitter. How much of that translates into noise or jitter
on the SPDIF link I can't tell. But that doesn't matter to me at this point.

The other argument of people questioning any ethernet impact is that each and every ethernet jack is galvanically isolated. Yep. That's right.
There's a little transformer in each of the jacks.
However. Those people tend to forget that there's still a ground connection in place.
The ground itself is not isolated! That ground feeds all the network EMI/RFI mess right into your device.
They also tend to forget that galvanic isolation doesn't mean HF isolation.
You should know that - my guess - 99.999% of all households don't have an ITU-T compliant grounding in place. The mains ground grid acts like a nice HF distribution network (antennas) and your Touch finally becomes the tip of that antenna. All that also gets through the "backdoors" - over the mains ground - into the other devices.



Let's try to nail it down. I suspect several sources that cause a certain impact:


a. Even though the ports are isolated by transformers, ground loops do exist.
    The connectors are not implemented hospital-grade like.
    The cable shield feeds the ground into the device.

    Proper grounding in private home networks just does not exist. The cable acts
    pretty much like an antenna feeding HF into the device.

b. Poor connection due to flimsy ethernet jacks or connectors might cause
    a certain negative impact. The connector won't sit tight all the time.
    As a matter of fact on the Touch I experienced exactly that.
    At these high frequencies you can't afford a loose connector or poor
    connections. There'll be all kind of reflections and crosstalk present.

c. Different load conditions on the Touch caused by congestion, changing traffic
    loads and negotiations do have a non-linear impact and might indirectly
    degrade the audio performance.
    This would get much worse using the internal WLAN connection.
    WLAN uses heavy encryption on top.

d. You're using an old style router.
    I figured and know from feedback that almost any router adds its own
    signature.

e. You are using rather low quality ethernet cables.
    (This is the normal situation in many households.)

What to do!?!? 

Network tuning for audio purposes was, and I guess still is, a pretty new subject. At least to me.
I haven't found any source on the net addressing the subject.

This is the situation as it looks to me from today's perspective:



2.1. Cabling



Look for good ethernet cable.  You wouldn't believe it. The cable and its endpoints can make a substantial difference.

I hear you: "Please, not this audiophile cable terror again" - this is ethernet and we buffer 20s of music data.

Here you'll find a review from the German Stereo magazine. They concluded that
in this test not a single (quality) cable complied with CAT6 (<250MHz).
And, except for one cable, not even with CAT5 (<100MHz).
Further they concluded that unshielded cable consistently sounded best.
The German company Meicord made it to rank 1 in that comparison.
 
Those Meicord folks have figured out certain aspects by now. Key elements are
supposed to be the shielding (ground loop - my guess) and the RJ45 connector.
(That's what I figured earlier.) They suspect a problem with
characteristic impedance mismatches (good end/bad end) around the nominal 100R and thus associated reflections. Crosstalk also seems to play a role.

Over here (in German) you'll find some more info about it.
What's missing though is an explanation of why all this would impact the audio
stream on the other side of the transformer.


Budget advice:


You might want to try a high quality CAT6A cable (<=500MHz). U/UTP stands
for unshielded and S/FTP for shielded.

 There is e.g. Draka UC900 S/FTP cable with Hirose TM31 plugs!!!
 (TM31P-TM-88P), which sells at a couple of $/€ over here in Europe.
 I paid 6-7€ for 1m.

 Note: I removed the shield of my cable (it's an S/FTP) 2 inches away from the
           connector to avoid a ground loop into the Touch. That puts at least one
           leg of that patch-cable on ground though - and still makes a nice antenna.
           I also cut the 2 power and 2 ground lines. You just need TX+/- & RX+/-.

 Advice: Watch out - many of those cables come with e.g. Hirose TM21 instead
              of TM31 plugs! This can make the difference!



Audiophile advice:

A German company called Meicord was so nice as to send me some samples for testing.

   
The build quality of those Meicord cables is extremely good.
The connectors look and feel rock solid.
My earlier used Draka/Hirose connectors look and feel much cheaper compared to the Meicord connectors.
The cable itself is a bit stiff. Though similar to my Draka UC900.

Now: Most important -- how do they perform?
 
I plugged a 1m unshielded patch-cord in between the Touch and my Cisco Hub.
I immediately experienced a clearly audible improvement. The resolution increased.
The overtone spectrum gained substance. You'll notice those changes pretty quick on brass instruments and orchestral music. 

The Meicords are again a step up on my Drakas. 


But please - don't expect a night and day difference! You should experience what I described earlier.
The same experience I made has been reported by a number of people. Its effect on different systems seems to be quite consistent.
If you intend to go after the best, put that cable on top of your wishlist.
 

2.2 Ethernet Hub on wired networks


 Try a small active ethernet hub/switch in front of the Touch.

 Use a very short high quality unshielded CAT6A patch cable. The hub does
 some signal refreshing. (You wouldn't believe that this makes a difference.) 
 Meanwhile I use a semi-professional Cisco SLM2008 active hub with good
 results. I also use it for TV etc. The Cisco is an intelligent little boy, more a bridge
 than a plain hub. You can even set QoS priorities on the ports.

  If you have a better power supply at hand you might also try that one
  on the HUB!!!



2.3. Ethernet/WLAN  bridging 
      

Quite a few people can't get their network going without using WLAN.

According to my TT recommendation that would be a no-go. WLAN usage
on the Touch makes a serious difference to soundquality.


I found an interesting solution - so-called wireless range extenders.

Those Wireless Range Extenders are almost plug'n play. You don't have to have
an IT degree to get those working.


The nice thing is that these devices usually come with an ethernet port that we can use
for our purposes. You can then connect the Touch via cable to that device and
still use my WLAN mod.


Those range extenders are powerful special purpose animals. Their performance is much better than that of the WLAN receivers built into your usual devices.


pro's:

a. We can run the Touch in wired mode over the last 3ft.
b. We can do the signal refreshing, as I'm doing it with the Cisco hub
    described above. (As usual you might want to try a better PS for that extender.)
c. You'll get overall better WLAN coverage.
d. You can also use e.g. the less busy 5 GHz WLAN band for better and more stable
    throughput.

but:
e. We might still face dynamic bandwidth changes on the WLAN link (see the quick check below).
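
If you want to see how stable the link through such an extender really is, a quick throughput test helps. A minimal sketch, assuming iperf3 is installed on both ends and 192.168.1.10 stands in for your server's IP - both are placeholders:

    # on the server (wired side of the network):
    iperf3 -s

    # on a laptop plugged into the extender's LAN port:
    iperf3 -c 192.168.1.10 -t 30

If the reported bandwidth jumps around a lot over those 30 seconds, you'll know what the Touch has to live with.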



Examples:
a.
Have a look at the Amped Wireless High Power Wireless-N Smart Repeater and Range Extender (SR300). (Thx Guidof from SBF for that hint.)
This product comes with 4 LAN ports. You put it right beside your Touch and connect it to the Touch with e.g. a 3 foot Monster ethernet cable.

b.
There's e.g. the Fritzbox FRITZ!WLAN Repeater 300E (no idea if these are sold overseas), which comes with just one LAN port (of course you could connect a hub here if you'd need more ports). The FritzBox comes with dual-band WLAN, 2.4 and 5 GHz - you'd be able to use the 5 GHz band.

Note: Please let me know if there are better devices than the above out there - I just quickly picked those without doing an in-depth market research.


Folks. It's really not that expensive or complicated.

The great side effect -- besides using them for our special streaming network purpose -- is that you'd for sure improve your overall wireless coverage and throughput in the house.


I might give it a try myself. You never know. It might beat my current cable setup.



3. SPDIF via COAX or Toslink



This is IMO one of the key subjects on the HW side. You wouldn't believe how much impact this SPDIF link can have.

If you don't have your data link - the SPDIF link in particular - under control, you'll face a rather high impact on your sound experience.

The SPDIF link can really be a nasty bottleneck.

Let's discuss Toslink first.

Toslink sounds OK if you use a top quality glass-fiber cable (e.g. Lifatec Silflexx Glass Toslink). Its biggest advantage is the great RF rejection/galvanic isolation.
You won't find anything better than that in terms of isolation.

I recently figured out that one of the key issues with Toslink is its typically poor implementation.
IMO it got such a bad rep because manufacturers don't put much focus on it anymore. The Toslink receivers and transmitters are active elements, which
convert the datastream twice - from electrical to optical and back.

If you look at e.g. the SB Touch transmitter, you won't find a buffer and decoupling cap at the transmitter. Power will not be stable and distortions can easily enter the datastream.
I changed that. On both ends. I hooked up 220uF OSCONs on each side plus a little 0.1uF MKP capacitor.

Since then I'm running Toslink.



A well done COAX implementation (the whole link from sender chip to receiver chip) might beat the Toslink connection. (I tried several pulse transformers and they all added a different signature to the sound.) There are manufacturers selling transformer coupling as "isolated". That's misleading.
These pulse transformers lower distortions in a certain range; a lot of the RF/EMI simply jumps over the transformer.
The real life problems with such an implementation are more than challenging:
galvanic RF connection issues, rather poor 75R characteristic impedance implementations, unmatched terminations causing severe reflections, poor connectors, poor cables, cable length discussions and so forth.

All of the above issues will cause extra jitter and noise. And that jitter adds to the jitter generated by the receiver chip itself.
As far as I can judge, the Logitech part of that implementation looks pretty basic. They skipped the pulse transformer on the RCA output that you'd find on high quality SPDIF implementations. The RCA jack is not 75R compliant; it runs roughly at 35-50R, usually causing nasty reflections. Inside the Touch the path passes more connectors, soldering joints and so forth. All this will cause more reflections and signal degradation.
These are just the first very obvious shortcomings. Any 75R compliant jack, BNC or RCA, would have been the better choice.

Three areas can and should be improved:

1. Galvanic isolation (only needed if the DAC doesn't come with a pulse transformer on its input) - you'd still benefit from lowered EMI/RF distortions though.
2. impedance mismatch
3. power supply / decoupling

Note:
Don't get fooled by the marketing around super expensive SPDIF cables. Trust me.
The endpoints are usually the main problem, not the cable.
A decent el-cheapo Belkin with well implemented endpoints will usually do.


Recommended SPDIF mods:                     

I applied some modifications, resulting in a worthwhile - I'd even call it serious - improvement.


a. improving the PS of the Touch incl. a choke filter (CLC/PI) and all electrolytic caps on the board
    as a base mod. This also has an indirect impact on the digital output stage.
b. direct wiring of the spdif cable to the mainboard (bypassing the Touch RCA and the internal comb
    connector - I cut the link off at the comb connector).
c. introducing a reference pulse transformer (e.g. Nevawa S22160, Digikey)
    (on the receiver side (DAC) only, and not more than one per link!!)
    Be careful with those transformers. They all add a different signature
    to the sound. I tested around 6 of them. Some were called audiophile, came at audiophile
    prices, and failed to prove it in my setup.
d. using a good quality but not overly expensive 75R coax (12 inch). If you pay
    more than 10-20$ you IMO paid too much.
e. I pushed it even further: both ends are hardwired now - no more
    connectors on the link. That's IMO as good as it can get.
f. My Toslink receiver/transmitter got some caps soldered to its power pins.



4. Onboard modifications


Once you have the Touch disassembled, and you consider yourself a rather experienced HW tweaker, you can continue by applying certain onboard modifications.
Most of the professional modification companies will do pretty much
the same stuff: better decoupling, better regulation, better clocks.
Usually at >250$ (e.g. Audiocom). Some of them even offer rather esoteric Bybee mods.

Caad from the Squeezebox forums did some extensive baking on the board.
He even had access to high quality measurement equipment to verify the
results of his mods.

Working on those highly integrated multilayer boards with all their tiny parts
requires a lot of experience and good tools.

You don't seriously want to fry your 300$ investment by swapping a cap or two. ;)




4.1 Analog Output Modification (for DIY enthusiasts)



The Touch comes with a rather decent DAC, an AKM AK4420, inside.
If you've got all the mods from above applied, it'll sound quite good to be honest.

As usual you can work on its decoupling, output stage and power supply.
I'm not sure if a clock upgrade would make much sense. The DAC has its limits.
You need to be careful not to waste too much money on it.

I just removed the coupling caps, which heavily impact the signal path.
That mod IMO is a must for those who want to run the Touch on its analog outputs.
It really lifts the output two notches up.

Again this is something for someone experienced.



4.2 Decoupling


I won't go deeper into the subject for now. Applying OSCON caps in "Lampizator" style will noticeably improve the performance of your device.

That's btw one of the main mods that all those professional modification companies apply.


I'll continue to add some more information sooner or later. The above referenced proposal by Caad pretty much covers the exercise though.


5. Server


That's also a nice subject. Let's discuss which server and server setup to choose for best performance!?!? I'm talking about sound and not processing power. ;)

Server!?!? Why are we talking about a server here? Yep. Many of you wouldn't expect
a server to be a potential tweaking target. To be honest, I wasn't expecting it either.

I compared many of those setups, and I can tell you they all "sound" different.
I know. That's not funny.



I don't want to speculate why it makes a difference. We're talking HW, OS, drivers and applications. Somehow anything can make a difference.

Now. What's my personal solution?

I'd differentiate between


a. NAS (usually a very slim  Server-Linux) running LMS
b. MAC-OSX
c. Windows 7
d. Desktop Linux
e. Server-Linux


5.1 Type of server

I prefer the flexibility of a "normal" PC running the server software.
I can do a lot more stuff with it than with a rigid NAS.
A NAS also usually lags months/years behind normal developments
on the SW side. You won't see many upgrades taking place.
I do consider the support life-cycle of those NAS devices a major problem.
To me these NAS are black holes. I keep my hands off those devices. (I've made my experiences.) Also forget ARM devices (RPI/Cubi/Udoo/...). As soon as you want them to run a challenging task (database rescan, realtime DSP, transcoding, resampling) they run out of steam.
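
A quick way to get a feel for whether a box has enough steam is to simply time a flac decode on it. A minimal sketch - /tmp/test.flac is just a placeholder for any test file you have at hand:

    # decode to stdout and throw the audio away - we only care about the elapsed time
    time flac --totally-silent -d -c /tmp/test.flac > /dev/null

If decoding a single track already eats a noticeable share of its playing time, realtime DSP or transcoding on top of it will push that box over the edge.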


If I say "normal" PC, I'm talking about quiet & fanless machines. These are IMO much, much better than the quad-core Wand/Udoo boards and the like. Obviously you can also use them as a playback client. Besides that, you'll have much less hassle with SW.


A headless type PC like a Zotac ZBOX ID82 (Core i3)/AD04, a NUC or similar, which comes with a real CPU - please avoid the Atom or ARM stuff - could be a pretty good choice. You'll usually get (e)SATA and USB3 ports at roughly 250$-350$.

If you consider also running a DLNA server feeding your TV set with HiRez videos, you might consider something that comes with muscle. You need
to have some air left for heavy realtime transcoding work.
I'm currently running an i5 processor with 8GB RAM and an SSD as
system disk. The PS is a Seasonic 400W fanless unit. ARM or Atom will fail at this point!!

As a goody I'm running an IBM 1000 pro network adapter (PCIe). You'll find
them used for a couple of $ out there. It really made a difference.
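
If you add such a card, it's worth checking that it is actually the active interface and has negotiated gigabit. A minimal check - eth0 is a placeholder for your interface name:

    sudo ethtool eth0 | grep -E 'Speed|Duplex'
    # expect something like: Speed: 1000Mb/s and Duplex: Full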

Besides that...

Make sure you have your mainboard drivers updated.

If you can manage it -- don't use a 5 year old leftover mainboard/PC.

What I also like is the Western Digital Green 2TB 2.5'' HDD. It's only 15mm high!!
It's quiet and small.

If you don't have a budget issue, I'd recommend going for a 512 GB Samsung SSD.
You'll get >1200 flac CDs on it. That should be more than sufficient for most
of you out there.
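
A rough sanity check of that number (assuming ~400 MB per CD ripped to flac - an assumption, your average may differ): 512 GB / 0.4 GB per CD ≈ 1280 CDs.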

AND: Always count a 2nd/3rd set of disks into your budget. Even though it hurts.
          You need healthy backup disks. Read my "Silent Death" post.



5.2.  Choice of OS


I experienced that all OSs sound different, for whatever reason.

The typical suspects: OS inefficiencies (incl. the networking stack) and driver issues.

There were times I went from Linux to Windows 7 to Windows 8.

Now I'm back to Linux - Ubuntu Server 14.04 minimal with a custom rt-kernel.
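
If you build a custom rt-kernel yourself and want to verify the box actually boots into it, a couple of quick checks help (a minimal sketch, assuming a PREEMPT_RT patched kernel on a typical Ubuntu install):

    uname -r                  # the release string of an rt-kernel usually carries an "-rt" tag
    uname -v                  # should mention PREEMPT RT on a realtime-patched kernel
    cat /sys/kernel/realtime  # prints 1 on PREEMPT_RT kernels (the file only exists there)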



Most of you will run a Windows system as server.
I highly recommend trying some of the known OS optimizations that you'd
use if you ran the PC as a transport.
Go for Windows 8 (better than Windows 7) and run e.g. Fidelizer.
You'll notice slight differences.



6. Wrap Up

I hope I could give you at least some ideas on the HW and network side.
With some of the HW mods you might gain rather substantial improvements.
But again: the better your DAC, the less impact you'll notice. And the more bottlenecks you've got in the chain, the less impact you'll notice.

The SW and HW mods that you'll find described over here make for a nice sounding, future proof and still affordable streaming solution.

Looking into your overall current and near-future server and network situation makes pretty much sense too.

Consider that you're going to hook up more and more streaming devices (TV, Blu-ray player and so forth) to it. All of these will put higher demands on your server and network.


Enjoy