You have probably heard of Moore's law driving the relentless improvement in computational power we've seen over the last 40 years. Unfortunately, this "law" isn't about power or speed at all, but purely about the number of transistors that can be packed onto a chip. Historically, speed came along for free, because smaller devices allowed clock speeds to be driven ever higher.
While Moore's law is still happily churning along, and probably will for at least another decade or two, processor speeds haven't really been increasing much lately. Instead we are now seeing multiple cores (copies of a whole processor) embedded in a single chip. Currently that mostly means 2- and 4-core chips, but since this is now the easiest way to use all the extra transistors, over the next 10 years we'll see many, many cores on your desktop. Numbers like 32-128, I'd suggest.
Cool, right? But there's a problem. Most of today's software only knows how to do one thing at a time, so if you had a 128-core machine today, it wouldn't feel any faster. We need to write more software that uses multiple cores. It's called parallel programming, and it's much harder than normal programming because it is prone to sudden and massive program failure as threads stomp on each other's data. It's been suggested that only 1% of programmers actually know anything about this.
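To make the "threads stomping on each other's data" hazard concrete, here's a minimal sketch in Python. The function and variable names are mine, not from any particular codebase: several threads increment a shared counter, and because `counter += 1` is a read-modify-write, two threads can interleave and silently lose updates unless a lock serializes them.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """Each `counter += 1` is read-modify-write; interleaved
    threads can both read the same old value and lose updates."""
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    """Holding the lock makes each increment atomic with
    respect to the other threads."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n_threads=4, n=100_000):
    """Reset the counter, run n_threads copies of worker, return the total."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the lock the total is always n_threads * n;
# without it, the result can come up short, and differently on every run.
print(run(safe_increment))
```

The nasty part is that the unsafe version often *appears* to work, then fails unpredictably under load, which is exactly the "sudden and massive program failure" mode that makes this stuff hard.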
At least I may be able to hold onto a job for the next ten years.