Wednesday, 17 February 2016

"Hi-Res" recordings... part 3 (the Vinyl revenge...)

Okay... so now it's time for the turn of the vinyl.  For a while my LynxTWO has been out of action (the input stage has failed multiple times over the past ~14 years, quite annoying on such an expensive card... it will eventually get fixed again one day when I can be bothered...), so I had to press the E-Mu 1820M into action instead.  The noise floor is slightly lower than the Lynx, but the distortion is slightly higher.  Still plenty good enough to be compared to the other sources, as it uses good quality AK5394A A/Ds.  I will be using my old John Linsley Hood shunt-feedback phono stage, powered off the original power supply.

For a while now I've been using a Lyra instead of the Shure V15VxMR with JICO SAS stylus... while the V15+SAS is a superb combination, as is often the way with audio, when you hear something slightly better, it's hard to go back.  The Lyra has a very wide frequency response, as is common with most high-performance Moving Coil cartridges... the distortion is also very low - I have measured it to be lower than the Shure, which is somewhat unusual for an MC.

I recorded each track and then level-matched the captures as best I could, matching the RMS power to make it a representative comparison.
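For anyone curious, the level matching itself is just an RMS calculation and a gain - something like this minimal sketch (it assumes headerless 16-bit mono PCM captures purely for illustration; the real captures were higher resolution and handled in an editor):

```c
/* rmsmatch.c - sketch of RMS level matching between two captures.
 * Assumes raw 16-bit signed little-endian mono PCM with no header;
 * the format and filenames are illustrative only.
 *
 *   cc -o rmsmatch rmsmatch.c -lm
 *   ./rmsmatch vinyl.raw hdtracks.raw
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>

static double rms_of_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); exit(1); }

    double sumsq = 0.0;
    long   n     = 0;
    int16_t s;

    while (fread(&s, sizeof s, 1, f) == 1) {
        double x = s / 32768.0;          /* normalise to +/-1.0 full scale */
        sumsq += x * x;
        n++;
    }
    fclose(f);
    return n ? sqrt(sumsq / n) : 0.0;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s reference.raw other.raw\n", argv[0]);
        return 1;
    }
    double ref = rms_of_file(argv[1]);
    double oth = rms_of_file(argv[2]);

    printf("reference RMS: %.2f dBFS\n", 20.0 * log10(ref));
    printf("other RMS    : %.2f dBFS\n", 20.0 * log10(oth));
    printf("gain to apply to 'other' for a match: %+.2f dB\n",
           20.0 * log10(ref / oth));
    return 0;
}
```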

First up is the 4Beards reissue, 4M101...


You can immediately see that there is a lot more transient energy above 22K (as in, it actually has some) than with the HDTracks download.  The mix and sonic tone are very similar to the HDTracks.  Next I took out my early copy of the Atlantic LP, SD8139.  It isn't in perfect condition, but it's very hard to find a mint one these days... the cut is almost identical in loudness to the 4Beards reissue, but the band compression seems to be slightly different.  Big difference in frequency range...


Okay... that's a lot more going on at the top!  Mix isn't quite as spacious, but there's a bit more bite to it... I guess that's due to the extended top end.

Comparing all of them in the 10K-40K range across the whole track...


But it becomes most obvious when you zoom into a small space in the music....


... just how much information the so-called "Hi-Res" version of the music is lacking.

So all I can really say is, buyer beware... my own recommendation is that unless you can be certain of the provenance of the high resolution material, you are better off finding a good pressing of the vinyl.  If we consider the 4Beards reissue for a moment, for me it sounds better than the "Hi-Res" from HDtracks and is actually about the same price to buy.  And you get a real "thing", which you can keep or sell at your leisure.

So why bother with the download?  Good question.  Convenience might be a reason, but bear in mind that you can get a CD of this album which you can rip in a few minutes, and by all accounts will not be inferior to the 24/192 version - at the time of writing this, there's a copy for 3.46GBP on eBay.

I'm getting my hands on an early version of the CD and will compare it to the 24/192 HDTracks in due course, but I think we're done with the surprises for now... :)

Monday, 15 February 2016

"Hi-Res" recordings... part 2

So I headed over to www.hdtracks.co.uk for the first time in what was a very long time... so long in fact that my account had been quietly closed!  Would have been nice if they'd mentioned that.  In any case, the reason I had stopped visiting is due to licensing restrictions... I couldn't buy anything I wanted to buy as it wasn't available in the UK.

Nice to see that this has been sorted out; the vast majority of titles are now available.  So I purchased a handful of albums, at considerable cost... most of the albums in 24/192 seem to be priced around 18GBP - for a well-known classic, you could probably pick up a second-hand CD in the region of 4-5GBP, so this is quite a premium for the privilege of downloading a few files.  The 96kHz downloads are slightly cheaper, but I wanted the 192kHz for downsampling tests with FinalCD.

I listened to the albums I knew well, and I have to say that I was somewhat underwhelmed.  I have encountered this with "Hi-Res" recordings before, as I said in Part 1... there are a couple of reasons why this may be - not that the albums were awful by any means.

In particular, I listened to Aretha Franklin's "I Never Loved A Man The Way I Love You" in 24/192... it sounded, well... pretty poor.  Maybe not a great recording.  The 4Beards vinyl version I have was not quite as rough.  So I went and had a look at the FFT for track 1...


Not a lot happening above 22K or so there... what about an average across the whole file?  You wouldn't expect it to be perfectly flat as that implies random noise which will cancel out...
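For the curious, an average like this is simply the spectrum of successive blocks, averaged.  A rough sketch of the idea (assuming headerless 16-bit mono PCM at 192kHz purely for illustration, and using a deliberately naive DFT for clarity rather than speed):

```c
/* avgspec.c - average the spectrum across a file to see whether anything
 * lives above ~22kHz.  Format, sample rate and block count are assumptions
 * for illustration; keep MAXBLKS small, the naive DFT is slow.
 *
 *   cc -O2 -o avgspec avgspec.c -lm
 */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N        1024        /* block length */
#define FS       192000.0    /* assumed sample rate */
#define MAXBLKS  200         /* limit runtime of the naive DFT */

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s file.raw\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }

    static int16_t buf[N];
    static double  avg[N / 2 + 1];       /* averaged power per bin */
    long blocks = 0;

    while (blocks < MAXBLKS && fread(buf, sizeof buf[0], N, f) == N) {
        double x[N];
        for (int n = 0; n < N; n++) {
            double w = 0.5 - 0.5 * cos(2.0 * M_PI * n / (N - 1));  /* Hann window */
            x[n] = w * buf[n] / 32768.0;
        }
        for (int k = 0; k <= N / 2; k++) {          /* one bin at a time */
            double re = 0.0, im = 0.0;
            for (int n = 0; n < N; n++) {
                re += x[n] * cos(2.0 * M_PI * k * n / N);
                im -= x[n] * sin(2.0 * M_PI * k * n / N);
            }
            avg[k] += (re * re + im * im) / N;
        }
        blocks++;
    }
    fclose(f);
    if (!blocks) { fprintf(stderr, "file too short\n"); return 1; }

    for (int k = 0; k <= N / 2; k++)
        printf("%8.0f Hz  %7.2f dB\n", k * FS / N,
               10.0 * log10(avg[k] / blocks + 1e-20));
    return 0;
}
```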


Hm.  Nothing at all - just noise with a few idle tones.  I had a look through the whole file and there's nothing up there other than noise and a stray tone, centred on 76.8kHz, presumably from the A/D converter.  Here's the spectral view, focusing on the 10K-30K band...


Hm.  Not looking good.

There's no question in my mind that this has been A/D'd at 24/192 - the noise floor is too strange to explain otherwise.  What is rather open to question is what source was feeding the A/D converter.  For an analogue source to brickwall like that would be highly, highly unusual.  I checked all the other 24/192 recordings I bought, and there was nothing like this... for example... Joni Mitchell's All I Want...


... looks natural and genuine.  Of course it is possible to "fake" a Hi-Res recording, but the Aretha Franklin looks to me like it could have been taken from a CD source... and if so, a slightly ropey D/A at that, one with a very high noise floor, given the low pass filter visible in the noise floor.  That would suggest a 1-bit converter from the mid 90s... or it might just be a very noisy reel-to-reel tape, who knows.

I contacted HDTracks to complain, and their response was disappointing at best.  They pointed out that they do not record or master the tracks, no-one else had complained about the album, and that if I had a problem, to take it up with the record company.

I pointed out their page about Quality commitment, which drew silence.  It seems they are very happy to take your money and then point the finger at someone else when a customer questions the quality being offered.  I find it hard to accept that the Aretha album can be called "Hi-Res"... either the master source simply has no content above 22kHz (which I suppose could be possible) or the source for this "high-res" master is actually a 16-bit 44.1kHz or 48kHz digital copy which has been played through a poor quality D/A and captured in 24/192 to pass HDTracks' "quality tests".

Even a cursory examination of the spectral analysis should have flagged this up (which HDtracks claim to do in their Quality commitment page), so it is clear that HDtracks do not vet their files very carefully, despite what they claim.

To try and get to the bottom of this, I'm obtaining some early vinyl of this classic album to see whether there really isn't a version out there with content above 22kHz... it will be interesting to find out!  I also have an early Japanese CD version of the album coming to compare the general sound quality with.

Something to bear in mind - when you pay for downloaded music, you have nothing to "sell on"... if you are not happy, you may be lucky and get a refund.  If not, you would appear to have little recourse.  It seems to be known that the "quality tests" at HDTracks have varied results - I doubt they are unique in this as they do not generate the material, only sell it, but some baseline of quality was to be expected from a company coming from Chesky...

I do hope these companies start to take quality a little more seriously, as it rather undermines Hi-Res downloads as a whole and will eventually unravel their massive margin when people realise that a) they can't be sure they are getting something better than CD quality and b) they are left with a rubbish bunch of 1s and 0s and a hole in their bank balance...

Frankly, rather than spending 18 quid on a download, get a decent physical pressing of these albums on Vinyl, where possible.  It may cost slightly more, but you will have something which you can enjoy, something you can touch, and in a lot of cases something which will actually retain value.  Your downloads are worth $0 once you have paid for them... !


Saturday, 6 February 2016

"Hi-Res" recordings... part 1

I had been hoping to get the wedding blog post series done by now, but as usual life/work gets in the way... time for an audio intermission... !

Some of the viewers of this blog may be aware that when not at my full-time job, I spend rather a lot of time working on audio... way back in, crikey... must be 2001, I was commissioned to develop the software for the Zero One Ti48.  This was one heck of a way to do my first commercial product, and is where I really cut my teeth on doing practical, high quality audio DSP, after my dalliance with digital crossovers back at university.

In some ways the Ti48 was way ahead of its time.  It allowed you to rip CDs to an internal hard drive back before the concept of a music server had materialised in general use.  While it was based on a PC architecture, and was criticised in some quarters because of that, such criticism misunderstood the work that had gone into the concept: through a combination of the right hardware and software, the quality was far beyond what a typical "PC player" could achieve, and it could offer truly "high-end" sound quality.

One thing that was particularly unusual about the Ti48 as an audio transport was its ability to play up to 192kHz material.  The only problem was that back in 2002, there wasn't any 192kHz material to play!  Audio A/D converters capable of doing 192K back then were rare, and probably custom designed, or re-purposed from another intended use.

96kHz capability had been around for much longer, probably hitting the mainstream back in 1993 with the Pioneer D-05... I do remember when this came out, and it seemed very exciting to be able to cover well beyond the hearing range to allow for improved digital processing and avoid the hairiness in the top octave - the reviewer marvelled at how much more natural the tape hiss sounded...  It took a lot longer to make it to other recording equipment, though...

While it was very cheap to make an existing 48kHz delta-sigma A/D do 96kHz - you just do less decimation at the end - this wasn't really optimal for performance, as you ended up with a lot of shaped quantisation noise right where your new octave was meant to be.  It really required a redesign of the modulators and in some cases faster bit clocks to achieve "true" 96kHz performance, but the potential was there.

This came in very useful for the advent of DVD... you may ask why?  Because DVD was the first "HiRes" digital format in wide public consumption... the story goes that the chaps at Pioneer managed to sneak in 24-bit 96kHz support to the official DVD specification... given their previous form with early 96kHz products, this makes a lot of sense - they felt it was beneficial, and having the main delivery format for films supporting it would put a huge number of players out there.  Very wise. 

Players were not forced to play 96kHz directly as I recall (they were allowed to downsample to 48K), but all had to be able to play a 24/96 disc.

I got my first DVD drive in perhaps 1998 or so from Creative... it was bundled with a big Dxr2 MPEG2 decoder card, as most PCs of the time were too weak to decode smoothly on their own.  Standalone DVD players were still fairly expensive at this time, so adding one to a PC was a reasonable solution for DVD watching.

At the time, I wasn't aware of the 24/96 capability - I was mostly buying it to watch films in a quality never encountered before at home... but some people were looking into what was possible...

One in particular was David Chesky... Chesky Records (along with Classic Records too) put out some of the earliest 24/96 DVD-Vs... these basically consisted of a static video frame which was then followed with pure 24/96 audio... DVD couldn't guarantee the bandwidth to offer more than 24/96 stereo in PCM, but this was a massive step up technically from what was available in the past.

Playing one of these discs on a computer used to be a proper pain in the backside.  What I ended up doing was extracting the raw data from the VOB files and then running a bit rearranger as the samples were packed into a strange order - this was worked out through trial and error on my part!  Then I had a normal 24/96 WAV file... while I had been able to record 24/96 since 1999, this was my first opportunity to see what a professionally recorded hi res recording looked like... indeed, on the FFT there was life above 22K after all!

While most large-diaphragm microphones struggle to remain flat up there, there is still plenty of energy going up that high, particularly for impulsive/percussive sounds, and while we may not be able to hear these through our ears very well (bone conduction is another matter), humans are remarkably good at hearing inter-channel differences... so it seems worthwhile to try moving up to a higher sampling rate from a delivery point of view.

What's the catch?  Well, you need a lot more storage, and you make the jitter problem worse.  Combined with the requirement to optimise a converter's characteristics for the higher rate, this means a converter may well sound better at a lower sampling rate.  An interesting test of this is to downsample high resolution audio... I developed a program called FinalCD to do just that.  It is a clunky, old-school command line program but is fairly well regarded in terms of its sound quality.
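At its core, a 96k to 44.1k conversion is a very sharp low-pass filter wrapped in rational resampling (interpolate by 147, decimate by 320, since 44100/96000 = 147/320).  The sketch below shows only a generic windowed-sinc prototype for the low-pass part - the tap count, window and cutoff are illustrative assumptions, not FinalCD's actual design:

```c
/* sincfilter.c - generic windowed-sinc low-pass prototype, the kind of sharp
 * filter a 96k -> 44.1k downsampler needs.  All parameters are illustrative;
 * a proper rational resampler applies an equivalent filter at the 147/320
 * interpolated rate.
 *
 *   cc -o sincfilter sincfilter.c -lm
 */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FS    96000.0   /* input rate */
#define FC    21500.0   /* cutoff just below 22.05k, leaving a transition band */
#define NTAPS 511       /* longer = sharper; odd for a symmetric linear-phase FIR */

int main(void)
{
    double h[NTAPS];
    double sum = 0.0;
    int    mid = NTAPS / 2;

    for (int n = 0; n < NTAPS; n++) {
        int m = n - mid;
        /* ideal low-pass impulse response (sinc), handling the centre tap */
        double x = 2.0 * FC / FS;
        double s = (m == 0) ? x : sin(M_PI * x * m) / (M_PI * m);
        /* Blackman window to control stopband ripple */
        double w = 0.42 - 0.5  * cos(2.0 * M_PI * n / (NTAPS - 1))
                        + 0.08 * cos(4.0 * M_PI * n / (NTAPS - 1));
        h[n] = s * w;
        sum += h[n];
    }
    for (int n = 0; n < NTAPS; n++)
        printf("%d %.12f\n", n, h[n] / sum);   /* normalise to unity DC gain */
    return 0;
}
```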

Certainly, I designed the sharp filter to capture as much as possible of the original 96kHz signal into the 44.1kHz sampling rate limitation of Compact Disc.  While it would be possible to go more precise still, it is really pushing close to the limit of what can be crammed on there and is technically close to perfect.  Many years ago, perhaps around 2004, I used FinalCD to compare a 24/96 recording to a 44.1 downconversion of the same material.  In the same player, the 44.1 sounded better... in a different player with completely different transport/DAC architecture?  Same result.  The 44.1 just sounded more musical.

This didn't make any sense at the time, but as mentioned above, it is not hugely surprising when everything is taken into account... running a D/A at a lower sampling rate increases the tolerance to jitter for reproducing the waveform correctly.  You are effectively trading the ability to time the signal transitions correctly against the settling time or amplitude precision of the D/A... this is precisely why delta-sigma converters suffer so much from jitter, as they need to run much faster to make up for their lack of raw resolution, often only composed of 31 or so elements... less than 5 bits.
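As a back-of-envelope illustration (generic numbers, not a measurement of any particular converter): the amplitude error caused by a timing error is roughly the signal's slew rate multiplied by the jitter, so a full-scale 20kHz sine with 1ns of jitter is already sitting at around -78dBFS of error:

```c
/* jitter.c - rough figure for how a timing error becomes an amplitude error:
 * for a sine, error ~ slew rate x jitter.  Numbers are illustrative only.
 *
 *   cc -o jitter jitter.c -lm
 */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    double f      = 20000.0;    /* worst-case audio frequency, Hz          */
    double jitter = 1e-9;       /* 1 ns timing error (illustrative)        */

    double slew  = 2.0 * M_PI * f;   /* max slew of a full-scale sine, 1/s  */
    double error = slew * jitter;    /* peak amplitude error, fraction of FS */

    printf("amplitude error: %.3e of full scale (%.1f dBFS)\n",
           error, 20.0 * log10(error));
    return 0;
}
```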

In any case, time moves on.  Since developing the Discrete DAC many years ago and combining it with custom digital filters and dither running on my Ti48 equivalent, I've been fairly content with the quality of my CD playback, with no big steps for improvement... the limitation seemed to mainly fall on the source.  Now I am an advocate of the potential of 16/44.1 and feel that it has been hard done by for many years with some truly terrible recordings and masterings (perhaps done under duress in the latter case), but there was always the nagging feeling that a bit more bandwidth could help if done right...

Aside from the work done on Sunrise, improving my analogue replay massively over the past year has perhaps shown better where CD would ideally be than any high-res recording had to date... so it was time to do some more investigation into the possible reasons.  To do this, I'd need some more "Hi-Res" material... ideally material I was familiar with and already had on multiple formats - it might help to shed some light...

Monday, 28 December 2015

Production notey...


First Prototype
 
It took a bit of coaxing to get the unit up and running... first attempt at display looked something like this...


Hm.  Not great.  It turned out that a couple of ground traces hadn't made their destination on the PCB due to a borderline failed ground plane fill.  A couple of additional wires sorted that out.

Ah... much better...


Now getting somewhere... nice to see some text from a real board.  But those faults need to be fixed before the production run...



Designing Revision 0.2

The first revision of the PCB had no user buttons at all (only a reset), so the only means of interacting with the unit was via BTLE... given that not everyone attending the wedding could be assumed to have a BTLE-capable Android/Apple device, it seemed like a good idea to add buttons so some functionality was possible.  After some research, I found some ultra-miniature tactile buttons at Mouser and was even able to cram two in.  In hindsight (as with the engagement ring - you'd think I should have learnt my lesson) more isn't necessarily better... the buttons are quite hard to press and have a very short travel.  The footprint was also quite marginal, so the soldering was a bit hit and miss.  There was a reason to try and cram on two buttons... you need at least two to have any adjust/set functionality without getting into short/long press territory... a particularly bad idea with fiddly buttons!

Another slight mechanical problem with the original revision was that the micro USB connector was slightly too high up and would foul on the Bluetooth module.

With a bit of rejigging I was able to move the micro USB connector lower down and also move more components to the front of the board... this is good as only the top side was reflowed - the bottom side is all soldered by hand!

v0.2 Schematic
If you look closely you may see something else that was added...

I eliminated a few passives to make the layout slightly simpler and make it less tedious to build... this is the back panel - new buttons aren't visible here as they are on the front of the PCB...

Back PCB layout
 
The new revision didn't introduce any new design problems, but the footprint of some components was still an issue... particularly the 0402 capacitors.

Many experienced PCB designers (far more experienced than I) will tell you this, but it bears repeating as I haven't eliminated making the same mistake... *always* check your footprints.  While I didn't make any howlers in this regard on this particular occasion, I had relied on my PCB package (Target 3001) to provide some of them.  Most are fine, but even these standard profiles have a variety of tolerances, ranging from "tight" to "loose"... the 0402 profile was very tight, meaning the footprint was almost identical in size to the component pads themselves.  While this isn't a fundamental problem when your reflow process is optimised and all factors are understood, it becomes more of a problem when you are using an unknown solder paste.  When the flux isn't aggressive enough, the components are dirty, or other stuff is just "not right", the component will not adhere to the paste and as a result will not be soldered correctly.  These parts might look okay on a first inspection, but as the footprint is so tight, you cannot look for solder fillets as proof.

What I found was that some 0402 caps would simply fall off the PCB when introduced to the slightest shock... or some PCBs would appear to function... you'd put them down, test them again, and then they wouldn't work any more... very frustrating!  Pretty much all the problems I had with board failures are down to these 0402 caps on a footprint that is a bit too small.  As there are so many, it is quite time consuming to fix!

Building 0.2

This time I had a proper template to hold the PCBs in place...


And back to the pasting...


And the component adding...


After reflow of the first v0.2 set...


It took a lot of troubleshooting, but eventually one board was kicked into life... even with the Bluetooth module as well...



As it got closer to the wedding day, it became apparent that doing one for every guest was not going to be feasible... not because there weren't enough components and PCBs - there were plenty - but because of the time spent troubleshooting faulty boards.

The production boards were from this point onwards built up in stages to help with this.  The first stage was connectors and capacitors.  This proved that power would get to the MCU and that there was no misalignment (which could potentially cause the MCU to fail).

Next, the CPU was added.  The reason the CPU was not reflowed is that with the initial prototypes, a lot of solder bridges were encountered... it was actually more time consuming to sort this out than it was to just drag solder the QFP in the first place, so that was the strategy employed from then on.  Also, it was suspected that reflow might cause some MCU failures due to the package epoxy being susceptible to moisture.  This problem is not as significant as it would otherwise be thanks to the lower melting point of the bismuth-based solder paste, but it still could possibly be enough to make the package crack due to the water vapour escaping.



The normal solution is to bake the parts for 24 hours at a medium temperature prior to reflow.  Unfortunately I did not have 24 hours spare!  In hindsight, I should have ordered a small number of components for prototyping, and then placed a second order for the production run, thereby avoiding opening the package of MCUs and exposing them to humidity.

When drag soldering, the package doesn't see the even temperature ramp that reflow provides, but the heat is only applied to the pins, which act as a form of heatsink.  The drag soldering can be accomplished on each side in a few seconds, so it is very quick and exposes the parts to minimal stress, particularly at the low temperatures of the bismuth paste.  I found I typically soldered the parts at around 200-220C... if I was doing normal lead-free work, the iron would normally be set between 320-350C!  I hate this for PCB work as it is plenty hot enough to break down the adhesive between the copper and the FR4 substrate, which causes the traces to lift.  While I've had many problems with this before (particularly with 1oz/25um copper boards), operating at the lower temperature meant not one trace on all 42 PCBs ever lifted.

I should briefly talk about the disadvantages of bismuth solder... it does have a faster rate of expansion/contraction than regular lead-free (which can be a particular problem with smaller SMDs, putting them under more stress) and it can also have issues with mechanical fatigue.  The fatigue problem, as I understand it, can be addressed through the addition of silver to the bismuth paste compound, but this paste is considerably more expensive... the BOM for the favours was already fairly high, so it seemed sensible to work with the Chinese-sourced paste if it worked.

While the flux used was not specified, I ended up using a lot of rosin-based flux from Chip Quik to retouch the boards.  This flux is slightly tacky so is quite good for rework, if a bit messy.  It is sold as a "no clean" flux but cosmetically it doesn't look great.  It can be effectively removed with some isopropyl alcohol.  I ended up dipping all the boards in IPA to remove residue - as there were no "wet" components on the boards (like electrolytic capacitors) this was considered safe to do.

So... the second stage was with the MCU on... the vast majority of boards showed life here.  As I'd stuck with the Leopard Gecko, it has both USB capability and an on-board 3.3V regulator.  While it was already an obvious choice to have a MicroUSB on board as it is such a ubiquitous connector, having the USB capability meant the device could be flashed over USB using the provided bootloader.  This is really handy as it proves most of the MCU is working without any need to write a full self test.  If I had 3.3V out from the regulator, I would then connect the output of that to VMCU... it can provide 50mA, which is enough for the whole board.

With the CPU working, the next stage was adding the EPD (electrophoretic display)... this is the trickiest stage of all thanks to the troublesome 0402 capacitors.  The failure rate of boards at this stage was very high, and was a big contributor to not being able to give everyone a board... as it turns out, that probably was better for further stages, but it was a source of great annoyance at the time!

The ZIF connector for the EPD was a little fiddly and could lose its extended tabs after too many insertions... I found that sliding a fingernail along the length of it was one of the safest ways of removing the flat flex of the EPD.
Assuming that the EPD worked, the next stage was the Bluetooth.

As had been hoped, the BTLE modules were trouble free, which was well worth the extra cost given the problems I had with the EPD caps!

On the day before the wedding, I had some great help from Anna and her friend Kate to complete the assembly of the units, which was mainly adding the battery holder, and putting in batteries.

Before the wedding day (and I mean, literally just before) I had written a basic test to prove the EPD and BT, and packed up all the boards before we got a taxi to the hotel... the software didn't really exist yet, though!

Friday, 30 October 2015

Notey starts to become real...

I had always intended to do two revisions.... a prototype and a production one.  This is because the probability of no mistakes seemed pretty low.

The schematic for the prototype v0.1 board needed to be done very quickly as there was a very limited amount of time.  I made use of application notes to decide how to lay out the USB power switchover and support circuitry, but had some snap decisions to make with regards to minimising development effort...

Bluetooth compromise

As previously discussed, I wanted to use the nRF8001 for its high level of simplicity... and had originally intended to do my own implementation - it does not require that many support components, and there is at least one balun specifically tuned to the requirements of the nRF8001 to give the best RF performance (a lot safer than using a bunch of individual passives, and probably better), plus a chip antenna.  However, once again, time constraints were against me.  Doing my own BTLE solution would introduce another potential thing to go wrong, even if there were some good reasons to do it (higher performance, lower cost)... there were a number of suppliers of pre-tested nRF8001 modules with a PCB antenna and all the soldering and passives already done.

I had originally tested with an Adafruit module but for production decided to go with Olimex as they offered a quantity discount.  There was another advantage to using an off the shelf BTLE module... it meant the main PCB could be made much smaller, which reduced cost to somewhat offset the module expense.  From looking at the Olimex board, the best way to connect to another PCB was through the double sided test header pads.  The two boards are connected through a simple 2.54mm header... a small routing indent was made in the PCB for the plastic connector spacer to sit in, so that the profile was not made any bigger than necessary.

The Olimex BTLE module
Prototype Schematic and PCB Design

I did the board in Target 3001 as I usually do... this was my first design with the new V17 - a few things are better but it's still as quirky as ever and not exactly the most stable piece of software I've encountered, but it is my favourite of all the ones I've tried.

For the first attempt at a schematic, I definitely erred on the safe side when it came to most design decisions, but for the PCB I did need to make the board as small as possible to keep the cost down, and also limited to double sided copper.
v0.1 Schematic
v0.1 PCB Layout
The PCB layout was a bit of a challenge... Anna's wedding ring was difficult, but this was at least as dense, and I'd be producing quite a few of them!  While 0402 components are small, they have the disadvantage that you cannot run traces between the pads very practically, so you don't save as much as you might think.

I normally try to avoid 0402 capacitors, but to keep the board to a 40x40mm size, there was no option to just use 0603... the EPD alone requires 16 support capacitors and numerous discrete transistors, which makes the design quite tight.  One good thing about 0402 is that it makes it easier to put the caps exactly where they are needed, which simplifies the layout... on a small board, the last thing you want is to need multiple vias just to route a decoupling cap... while most of the support caps do not require super low inductance as they are not for high frequency use, it is still good practice to try and minimise it.

While I could use both sides of the board for components, it didn't seem practical to reflow the board on both sides (there's a risk of components falling off during the second reflow unless you use a solder paste with a different melting temperature), so the display side would all be hand-soldered.

Rendering of (mostly!) completed v0.1 board
So I sent off the board files for a super-quick turnaround and got them back the same week...

Prototype PCBs
PCB with the nRF8001 for size/location reference
So onto aligning the stencil... this is a real pain to do when you have multiple parts with 0.5mm pitch, particularly on the QFP as it has to be right in both directions.  When I was happy it got stuck down with kapton tape using some other 1.6mm thick PCBs as spacers.

Stencil alignment
Applying the paste was tricky.  It helps to stir the solder paste to warm it up a bit as then it flows a bit better.

Applying solder paste
The end result of the squeegee action was a bit rough and ready.  I wouldn't quite call it a "high definition" result but it was a first attempt after all.  Made me think quite a few solder bridges were likely...

PCB with paste applied
Well, I'd find out soon enough.  Time to place some components - I used my tweezers to carefully start putting all the major components into position.

Just Enough Essential Parts?
And into the reflow oven it goes... now it's a waiting game...

Low temperature reflow profile in progress
A few minutes later, the oven beeped and I opened the drawer to have a look..

Dodgy reflow...

Ah.  Not quite the outstanding success I was hoping for.  A few of the SOT-23 transistors have reflowed, but almost nothing else.  Gah.  Okay, so the low-temp profile clearly isn't hot enough, either due to the oven temp sensor being off or due to the paste having different characteristics.  There didn't seem to be any harm in giving the old-school leaded solder profile a try just to get something up and running - the temp is still much lower than lead free and should put less stress on the components.

Dodgy reflow of a different kind
Well... it's definitely reflowed, there's no question of that.  Unfortunately my fears regarding the paste and the tight pitch spacing were very much confirmed - lots and lots of solder bridges!  This had to be sorted out by hand with some flux and some desoldering wick.  The profile and footprints will definitely have an influence on this, but it could be that the consistency of the paste is not well suited to such fine work.  In any case, I needed to try and get a prototype board working...

Tuesday, 27 October 2015

Notey design and production aspects

And so to the matter of creating a real design...

The hardware was originally going to be based around a Gecko G210F128, but when I prototyped with it, I had problems with the E-ink timings that were not happening with the LG... for that reason it was decided that despite the cost, it was more sensible to stick with a solution that actually worked!

The challenge with the LG was that it is a more complicated part with a higher number of pins... the originally targeted Gecko part only had 32 pins and was a small QFN package.  I'm generally not a fan of leadless packages as it makes it much harder to see whether the part has soldered correctly, but it would help to conserve precious PCB space.  The smallest LG I could find was 64 pins, which is quite a lot... thankfully the LG was also available (albeit with limited availability) in QFP format... pins - great!  This is particularly helpful as it allows for hand soldering.

Soldering

In any case I had decided already that this project would be a great opportunity to try out reflow soldering, so had purchased a Chinese reflow oven from eBay.  These ovens are very good value for money but have two major problems... the first is that the software is terrible... highly unresponsive, and the temperature readings are dicey at best... the second is that the controller board is electrically insulated from the main oven with a large quantity of masking tape.  The masking tape gets hot and then the adhesive starts to stink, as it is not designed for high temperatures.  The usual fix for this is to remove all the masking tape and replace it with kapton tape, which is designed for high temperature - thankfully the eBay seller had already done this.

When I got the unit, I had a look inside and decided to also add a 1-wire temperature IC next to where the temperature probe comes in... this allows for better cold-junction compensation, and as a result the reported temperatures could be made more accurate.  This is no use without new software, though!  Thankfully an enterprising engineer has reverse engineered the oven and come up with a complete firmware replacement... flashing it is as simple as using a serial cable and pulling a test pin high.  The firmware flashing worked perfectly, although it was a little disconcerting that the oven and fan were on full blast during the process!

Unmodified board
Cold junction compensation added
In order to prevent PCB damage, I'd decided to try using Bismuth-based solder paste rather than the usual lead free mix that is mostly tin... the Bismuth based paste is available on eBay fairly cheaply and has a much lower melting point.



A few things on this... the "Soda" paste has a somewhat unknown provenance (at least to me) and I couldn't find any published temperature profile.  Profiles can vary anyway depending on the thermal mass of particular designs, so I started with a low temp profile and then modified it.  I haven't checked the oven calibration, but it seemed to struggle to reflow my fairly small boards, so I ended up using a leaded profile... a higher peak temp than bismuth should need, but still much lower than a usual lead free which can peak above 240C.

I did have some problems with areas where the paste was excessive or where there wasn't enough.  At least part of the blame for this is to do with the footprints for the parts. 

Power

In terms of the rest of the design, we needed some kind of power source.  So initially coin cells were considered... particularly the ubiquitous CR2032 cell.  While the CR2032 has a very impressive capacity for its size and very low self-discharge, it has fairly high ESR which makes it poor for applications which require high pulses of current.  Just pulling 10mA creates a voltage drop of 200mV near the beginning of the cell life, and towards the end of cell life, the ESR increases substantially causing a much bigger voltage drop.
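The arithmetic is simple enough - the drop is just I x ESR, so the ~20 ohm fresh-cell figure is implied by 200mV at 10mA (the end-of-life value below is just an assumed illustration):

```c
/* cellsag.c - the CR2032 voltage-sag arithmetic: drop = I x ESR.
 * The fresh-cell ESR is implied by 10mA -> 200mV; the end-of-life
 * figure is an assumption for illustration only.
 */
#include <stdio.h>

int main(void)
{
    double pulse_ma  = 10.0;   /* pulse current drawn by the radio/display   */
    double esr_fresh = 20.0;   /* ohms, implied by 200mV drop at 10mA        */
    double esr_eol   = 60.0;   /* ohms, assumed end-of-life value            */

    printf("fresh cell : %.0f mV drop\n", pulse_ma * esr_fresh);  /* mA x ohm = mV */
    printf("end of life: %.0f mV drop\n", pulse_ma * esr_eol);
    return 0;
}
```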

Part of the problem could be worked around using a supercap, but if the system ever got into a loop where the pulse rate could not be managed, the cell could get very hot and unmanageable.  Paralleling cells would reduce the ESR problem but is not an ideal solution, and, if using disposables, is also quite wasteful.
So my thoughts turned to using a lower ESR power source without these problems... two major candidates - lithium ion/poly cells or low self discharge modern NiMH batteries.

Lithium ion cells are widely prevalent and not too expensive on the whole, but the big concern was with over-discharge... when a lithium battery is abused, it can be ruined, and then replacement becomes an issue.  NiMH cells have considerably lower energy density than lithium cells, but they are freely available and easy to replace if they are damaged, so with modern LSD (low self-discharge) technology they looked like a good option - their typical self-discharge is also considerably lower than a typical lithium cell... very important for an application where the battery won't be used for periods.

The significant disadvantage of NiMH is that they are complicated to charge well... there are a couple of techniques, one of which requires monitoring temperature, and the other (negative delta V) looks for a small voltage drop to indicate the battery is full.  This only works well when the battery is charged at a fairly high current, otherwise the drop is much more difficult to detect.
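A sketch of what the firmware side of negative delta V termination looks like - the ADC and charger-control functions are hypothetical placeholders and the thresholds are illustrative; a real charger IC also handles timeouts, temperature and trickle charging:

```c
/* Minimal sketch of negative delta V charge termination for NiMH. */
#include <stdint.h>
#include <stdbool.h>

#define DELTA_V_MV  10   /* terminate when voltage falls this far below its peak */

extern uint16_t read_battery_mv(void);     /* hypothetical: battery voltage via ADC   */
extern void     charger_enable(bool on);   /* hypothetical: switch the charge current */
extern void     delay_seconds(int s);

void charge_nimh(void)
{
    uint16_t peak_mv = 0;

    charger_enable(true);
    for (;;) {
        delay_seconds(30);                 /* -dV is a slow signal, sample slowly */
        uint16_t v = read_battery_mv();

        if (v > peak_mv)
            peak_mv = v;                   /* track the highest voltage seen */
        else if (peak_mv - v >= DELTA_V_MV)
            break;                         /* voltage has dropped back: cell is full */
    }
    charger_enable(false);
}
```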

Doing a discrete circuit for this is a real hassle... a microcontroller can handle a lot of the burden, but should the code crash or something else go wrong, you can quickly end up with a ruined battery.  For this reason an off the shelf solution was preferred.  After a bit of research, the LTC4xxx looked to be a good option.  Expensive, but fairly straightforward to implement.

One mistake I would discover later on was omitting any voltage regulator from the MCU supply.  This was not an obvious mistake as such, as all components should tolerate up to 3.6V on the core, and for two NiMH cells, that is comfortably beyond what you should charge them at, but what I found out very late on is that at startup, the voltage can exceed this and fry the MCU.  As a result I decided to leave off the charging circuitry... a clamping zener would probably have made things safe, but zeners need something to work against... which in turn would require a polyfuse... that equals more voltage drop and more leakage current... the desire was to be able to connect the battery directly to VMCU and let the naturally low consumption of the MCU be the main draw beyond the battery's own self-discharge.

The intended size for the unit was based around the dimensions of the EPD, which is roughly a 2.7" display... while the unit could be thick, I didn't want the batteries to completely cover the rear panel as that could make it difficult to access headers, ports, etc... it also adds more mechanical strain... the unit was to be attached magnetically, like a fridge magnet, and that is easier with a lighter weight.  For this reason, I chose 2xAAA cells over 2xAA, even though the AAs would have roughly 3x the capacity.  In any case, a battery life exceeding a month was considered plenty, though it will come down almost completely to how many display updates and BTLE negotiations are done...
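The budget behind that month is easy to work out - assuming a typical ~800mAh for an AAA NiMH cell:

```c
/* budget.c - the average-current budget implied by "at least a month" on
 * 2xAAA NiMH.  The ~800mAh capacity is a typical figure, assumed here.
 */
#include <stdio.h>

int main(void)
{
    double capacity_mah = 800.0;             /* typical AAA NiMH capacity (assumed) */
    double target_days  = 30.0;              /* "exceeding a month"                 */

    double hours  = target_days * 24.0;
    double avg_ma = capacity_mah / hours;    /* average draw that empties the cells */

    printf("average budget: %.2f mA (%.0f uA)\n", avg_ma, avg_ma * 1000.0);
    return 0;
}
```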

Monday, 26 October 2015

Prototyping the notey

At this point I had a reasonable idea what hardware was going to be involved, so could try testing some of it out...

The screen

Getting the e-ink working was very easy with an Arduino, but proved to be more difficult with the EFM32... there was some source code available, but it was for an older version of the display, which was both quite incompatible with the newer B13 release and also used a comical amount of RAM, as it relied on a framebuffer and a graphics library for rendering text.

Even though I was developing on the LG, I wanted to be able to target a regular Gecko with only 16KB of RAM and 128KB of Flash, and no USB capability... this was a much cheaper part (about half the price), which would greatly reduce the BOM cost.  It also can only run up to 32MHz, but in theory that should have been okay.  It makes things trickier with the screen, though.


So what I did was take the Arduino library that worked and port it to the EFM32... as an AVR has virtually no memory, the library got by with just a line buffer, which slashed the memory requirements.  The SPI was a bit fiddly but eventually I got it working.
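The line buffer approach looks something like this - generate each scanline on the fly and push it straight out, rather than holding a multi-kilobyte framebuffer.  The panel dimensions and helper functions here are illustrative assumptions, not the real driver's API:

```c
/* Sketch of line-buffer rendering: one scanline of RAM instead of a frame. */
#include <stdint.h>
#include <string.h>

#define EPD_WIDTH   264                      /* assumed 2.7" panel geometry */
#define EPD_HEIGHT  176
#define LINE_BYTES  (EPD_WIDTH / 8)

extern void epd_send_line(int row, const uint8_t *line, int len); /* hypothetical SPI push        */
extern void render_line(int row, uint8_t *line);                  /* hypothetical text/graphics source */

void epd_draw_frame(void)
{
    uint8_t line[LINE_BYTES];                /* 33 bytes of RAM instead of ~5.7KB */

    for (int row = 0; row < EPD_HEIGHT; row++) {
        memset(line, 0x00, sizeof line);     /* start with a blank (white) line */
        render_line(row, line);              /* fill in just this scanline      */
        epd_send_line(row, line, sizeof line);
    }
}
```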

A note on the e-ink screens... these are really closer to an analogue device than a traditional digital screen... in order to get a clear picture you first have to write a particular pattern to clear any ghosting from the previous image, and then, depending upon temperature, draw the new image a particular number of times with suitable delays.

If you just try to draw an image once, you will still be able to see what was previously written in the same space, just a bit faded (this one has been stopped mid-write for the new frame, which causes an interesting bleed effect...).


 The thing is, the longer you spend updating the screen, the more power you are consuming... and there is also a balancing act between how fast you run the processor (which will affect the SPI transfer speed) and the total power consumed to update the screen.  Looking at power measurements, it seemed that running the processor flat out was most efficient in terms of total power consumption for a screen update... then sending it back to sleep as soon as possible.

To get a really clear image may take 4 redraws, but a perfectly readable image is normally possible in 2, with some degradation.  This is both quicker from a UI perspective and also consumes much less power, but eventually the ghosting of previous images will reduce the contrast to the point where a full clear is necessary.  This generally involves an inverse image of what was previously drawn, and then alternating solid black and white... the Pervasive Displays datasheet has more information.
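Put together, the update logic ends up looking roughly like the sketch below.  The pass counts, temperature breakpoints and helper functions are illustrative assumptions - the datasheet has the real values:

```c
/* Sketch of a multi-pass EPD update: inverse of the old image, solid
 * black/white flash, then repeated draws of the new image.
 */
#include <stdint.h>

extern void epd_write_inverse(const uint8_t *frame);  /* hypothetical driver calls */
extern void epd_write_solid(uint8_t value);
extern void epd_write_normal(const uint8_t *frame);
extern int  read_temperature_c(void);

static int passes_for_temperature(int temp_c)
{
    if (temp_c < 5)  return 8;      /* cold panels need many more passes        */
    if (temp_c < 20) return 4;
    return 2;                       /* quick but readable; ghosting builds up   */
}

void epd_update(const uint8_t *old_frame, const uint8_t *new_frame)
{
    int passes = passes_for_temperature(read_temperature_c());

    epd_write_inverse(old_frame);   /* inverse of what was there knocks back ghosting */
    epd_write_solid(0xFF);          /* alternate solid black...                        */
    epd_write_solid(0x00);          /* ...and white                                    */

    for (int i = 0; i < passes; i++)
        epd_write_normal(new_frame); /* repeated draws build up contrast               */
}
```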

Something that isn't covered in great detail is the endian-ness... I found it was necessary to flip the bytes about, but that was quite easy to do with a lookup table.
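One way to do the flip with a lookup table (this shows reversing the bit order within each byte; whether the panel wants bit- or byte-order swapping depends on how the framebuffer is packed):

```c
/* 256-entry lookup table for reversing the bit order within a byte. */
#include <stdint.h>

static uint8_t bitrev[256];

static void bitrev_init(void)
{
    for (int b = 0; b < 256; b++) {
        uint8_t r = 0;
        for (int i = 0; i < 8; i++)
            if (b & (1 << i))
                r |= 1 << (7 - i);
        bitrev[b] = r;
    }
}

static void flip_line(uint8_t *line, int len)
{
    for (int i = 0; i < len; i++)
        line[i] = bitrev[line[i]];   /* cheap per-byte fix-up before sending */
}
```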



One note about the ghosting... some e-ink displays have greyscale capability... the Pervasive unit is not designed for this, but in theory you could probably get a few shades out by carefully understanding the persistence of the image and how it varies with temperature.  I decided this was beyond the scope of the favours, as I was up against it in terms of time!

The Bluetooth

I bought another module off the shelf to help with development... first, the Adafruit nRF8001 unit...


It is clearly targeted towards the Arduino crowd as it has a 3V regulator and level translation... neither is desirable in the Notey but for a prototype it is fine.  The example code for the Adafruit module talks to the Bluetooth module via ACI, which is a simple high level interface.  I was able to take the essence of the Adafruit code over to the EFM32, translating it into straight C on the way, and replace the ACI section with the EFM32 port.

I've had a few issues with dropped connections, but generally the nRF8001 is a plug and play solution for sending messages across BTLE, and importantly, has a free application available for both Android and Apple devices.
In terms of supporting BTLE, Apple were in early, as an iPhone 4S or later is all that is required... Android is rather more fragmented... v4.4 is required for stable support.  I picked up a second-hand iPhone 4S and also a Moto G for testing.  Both are relatively inexpensive now, particularly the G, which had a cracked front glass but was still perfectly functional.

Software

While there were a lot of challenges ahead just to get the hardware finished, I needed to at least make a start on the software, so worked on some font rendering.  It had been a while since I'd had to do any font stuff, but I decided to go slightly more advanced than I had with the engagement ring and splash out on.... wait for it... proportional fonts.  No, not exactly earth shattering, but it means the fonts wouldn't look silly.  I did consider custom kerning as well, but that was really a "nice to have" rather than an essential item.

A bit squashed at the first attempt, but it works...


I found a helpful .NET application which would convert TrueType fonts into bitmaps along with size and proportioning information.  These were then brought into the main code as const char arrays.
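Roughly, a proportional font ends up in the code looking like this - each glyph carries its own width, so the renderer advances by a per-character amount rather than a fixed cell.  The structure layout here is an illustrative assumption; the real tables came out of the converter tool:

```c
/* Sketch of proportional font tables and string drawing. */
#include <stdint.h>

typedef struct {
    uint8_t  width;            /* pixels to advance after drawing this glyph */
    uint16_t offset;           /* index into the packed bitmap data          */
} glyph_t;

typedef struct {
    uint8_t        height;     /* all glyphs in a font share one height      */
    uint8_t        first;      /* first ASCII code covered (usually ' ')     */
    uint8_t        count;
    const glyph_t *glyphs;
    const uint8_t *bitmaps;    /* packed 1bpp pixel data                     */
} font_t;

extern void draw_glyph(int x, int y, const font_t *f, char c); /* hypothetical renderer */

/* Returns the x position after the string - proportional advance per glyph. */
int draw_string(int x, int y, const font_t *f, const char *s)
{
    for (; *s; s++) {
        char c = *s;
        if (c < f->first || c >= f->first + f->count)
            c = '?';                             /* substitute for uncovered codes */
        draw_glyph(x, y, f, c);
        x += f->glyphs[c - f->first].width + 1;  /* +1px inter-character gap       */
    }
    return x;
}
```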

I brought in three fonts... a 12pt, a 24pt, and a 32pt.  In hindsight, the 24pt is too big, as it only gave an extra row of text, but it seemed like a good idea at the time!

With some basic BTLE and display test applications working, it was time to get cracking on the real hardware...