[time-nuts] Follow-up question re: microcontroller families

Attila Kinali attila at kinali.ch
Sat May 25 17:42:19 EDT 2013


On Sat, 25 May 2013 16:09:11 -0400
"Charles P. Steinmetz" <charles_steinmetz at lavabit.com> wrote:

> On another thread, Bob wrote:
> 
> >If the objective is to complete a very simple, low powered project 
> >and be done with it, go with the Arduino. If the objective is to 
> >learn an empire, be very careful about which empire you pick. The 
> >ARM boys are quickly gobbling up a lot of territory that once was 
> >populated by a number of competing CPU's. Learning this stuff, and 
> >getting good at it is a significant investment of time.
> 
> I'm starting a new thread because I don't want to hijack the first 
> one, which I'm hoping will continue to provide useful information 
> about the broad continuum of available devices, from the "easy enough 
> for a child to assemble and program" to the "need to learn machine language."
> 
> My question here is more pointed: If one is going to learn a new 
> system today for timing and other measurement/control projects, which 
> "empire" is likely the best one to choose?
[...]
> Some of the more systemic (less application-oriented) factors would 
> be, which system is more versatile?  Which has the most useful PC 
> cards (or development kits) available that do not require the user to 
> start with a bare chip?  Which is likely to be around and supported 
> longer?  Which is easier to program?  For which is one likely to find 
> more programs to study and pirate, more libraries, etc.?  Which is 
> easier to outfit with removable memory (USB drives, memory cards, 
> etc.)?  Which has better and faster ADCs and DACs?  I'm sure there 
> are lots of other factors worth considering, as well.

You are asking a difficult question. And one for which the answer
changes over time... a lot!

I would ask the question slightly differently: What is your limiting
factor? Time or money?

Most hobby projects will be somewhere between time-limited and money-limited.
If you know which one constrains you more, you have a better chance of
choosing the right uC family.

Given that you know what your uC should be able to do, that you know the
technical limiting factors (power consumption, computational power,
special peripherals, interfaces to the outside world, board space), and that
you have an idea of what the uC landscape out there looks like, you can choose
one based on how much it costs or how much work it takes to get
it up and running.

Please do not forget that designing a uC board, or even using an evaluation
board, takes the least of your time. A lot more time is spent writing
the software. So when selecting a uC you should do a quick search
for libraries and/or an RTOS for the families you are considering. Having a
good library that you can rely on to do the ground work (like controlling
a serial interface, or doing I2C transfers) can save you a lot of time.
Oh, and do yourself a favor and have a look at how the library's code
is written, especially if it is provided by the chip manufacturer itself.
Most of these are very badly written and are more work to use than writing
the same thing yourself from scratch.

Good documentation, i.e. fully available datasheets with lots of explanations
and diagrams, is a must. You don't want to have to reverse-engineer what the
designers did just to write your ADC driver. A good example of how to do it
is, IMHO, TI. Their datasheets are as complete as I can imagine them to be,
and they provide with each chip an extensive and embarrassingly long errata
list. You do want this. Knowing the bugs of your uC is key to everything.
If a manufacturer doesn't publish errata, look for someone else. Chips
have bugs, all of them. An example of how not to do it is Atmel.
Although their datasheets are not too bad (sometimes confusing, sometimes
lacking in detail, sometimes just damn wrong, but generally OK), they publish
very few errata, if any at all. And when you report to them that their chip
has an undocumented bug, they just ignore you.

Using a slightly larger uC than necessary will also shorten software
development time, as it allows you to "waste" resources by taking
shortcuts in software. I.e. for a hobby project I would rather go for a
32-bit uC with lots of internal flash and RAM than a tiny 16-bit one, so
I don't have to deal with limited memory.

Also, if gcc or a gcc derivative supports the uC, that is a _BIG_ plus.
gcc might not be the best compiler out there, but it beats the heck out
of most of the commercial ones. Not to mention that some of those tend
to miscompile your code in mysterious ways if you switch from one
license to another (as I have recently experienced with IAR).
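For the ARM Cortex-M case, a bare-metal gcc cross-build looks roughly like
this (a sketch, assuming the GNU arm-none-eabi toolchain is installed; the
CPU flag, the linker script name, and the file names are placeholders for
whatever your project actually uses):

```shell
# Compile and link for a Cortex-M3, garbage-collecting unused sections.
# mychip.ld is your part's memory-layout linker script (name made up).
arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -Os -Wall \
    -ffunction-sections -fdata-sections -Wl,--gc-sections \
    -T mychip.ld -nostartfiles \
    -o firmware.elf main.c

# Strip the ELF container down to a raw image for flashing.
arm-none-eabi-objcopy -O binary firmware.elf firmware.bin
```

The same two-step compile-then-objcopy pattern works across vendors, which
is part of why a gcc-supported uC costs you so little tooling effort.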

That said, I wouldn't fixate on one or the other uC family at all.
Once you have seen one of them, you have seen all of them. These days,
uCs all work in quite similar ways: mostly RISC or RISC-like architectures,
neatly hidden behind a compiler. What you usually have to deal with are the
peripherals, which tend to come in only a handful of variations (I2C, SPI,
ADCs etc. aren't complex enough to warrant totally different
interfaces). Maybe you have to deal with the memory layout, but
most chips use a unified code/data space (aka von Neumann
architecture). I have not seen any recent 16- or 32-bit uC that uses
a Harvard architecture (I exclude the 8-bit parts here, because I haven't
looked at 8-bit uCs in ages).

Last but not least: there is an advantage in using the more popular
chips (AVR, ARM Cortex-M). You will find more know-how and help on the
net for the toolchain and other problems. You will find more ready-made
libraries and code collections out there. And you will have a gcc version
with fewer bugs.

HTH

				Attila Kinali

-- 
The people on 4chan are like brilliant psychologists
who also happen to be insane and gross.
		-- unknown

