This sounds really interesting, but does anyone know how useful it is in practice? 90 GFLOPS doesn't seem that high compared to e.g. high end i7s, though I imagine there's a lot of variability in how that's actually calculated.
Also, if it is as fast as claimed, surely 1GB of memory would be a large limiting factor?
If I remember correctly, my Xeon E7-8870 server pulled at most 100 GFLOPS double precision. I think high-end i7s max out at about 180 GFLOPS single precision. Considering relative costs (and the scale the 64-core version can reach), this is really cheap per GFLOP, in both power and dollars. (The E7-8870 is ~100 W and around $4,000.)
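A rough per-GFLOP sketch using the numbers quoted above; note that the ~5 W board power for the Parallella is my own assumption, and pairing the 90 GFLOPS claim with the $99 entry board is loose:

    # back-of-the-envelope comparison; Parallella wattage is assumed (~5 W),
    # the other figures are the ones quoted in this thread
    for name, gflops, watts, dollars in [
        ("Parallella",   90,   5,   99),
        ("Xeon E7-8870", 100, 100, 4000),
    ]:
        print("%-13s %5.1f GFLOPS/W   $%6.2f per GFLOP"
              % (name, gflops / watts, dollars / gflops))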
...but I can buy an off-the-shelf $200 GPU (e.g. an Nvidia GTX 660) with stable tools that work pretty well for highly parallel tasks, and get ~2000 GFLOPS according to [1].
Or are there workloads where the Parallella would actually make sense?
Power requirements are low, but the development environment is probably rough, and development costs (regardless of stack) are going to be orders of magnitude higher than your electricity bill anyway for most projects.
A GPU will always beat this for tasks with many data streams and few instruction streams. This is for workloads with many instruction streams, which GPUs do really badly on.
And it doesn't take much for electricity costs to be important - electricity already makes up a substantial percentage of hosting costs for colocation.
Considering the Zynq-7000 series FPGA, yes, there are many use cases where this is much better than any CPU or GPU. It's a really uncommon use case, though, and the best use for this board is probably as a development board for ASICs with ARM IP cores.
The benefit of the FPGA, though, is that you can use it to add whatever peripherals or even coprocessors you want for your specialized application.
It would be awesome to see a 10 Gbit fiber-optic PHY on it.
The reply above explained it better. The Intel CPU plays the same role in a GPU setup that the ARM cores play here, and the big benefit of the FPGA is feeding data into the 64-core chip without being constrained by PCIe. (The FPGA has DSP blocks too, so it'll be able to feed in processed data from sensors faster than an equivalent Intel setup would.)
Speed is not the only factor; there's also power consumption and cost. I would hazard a guess that the Epiphany co-processor is a lot less power-hungry than a high-end i7.
Looks like an excellent first step to me. I've been waiting for hardware along these lines for years, and am glad to see some progress is finally being made.
But when will we have programming languages that can accommodate them? Haskell gives some hope, and Go, but we're nowhere near where we need to be for mainstream developers to take advantage of so many cores.
There is already a great deal of language support, and having boards like this will certainly help popularize the parallel programming skill-set. Since it looks like there is an integrated FPGA, I'd like to see Dynamic Method Migration[1] make some progress too.
Given the libraries, mainstream developers will probably have no problem taking advantage of this class of hardware, using whatever language they might deem appropriate. Of course when I see this board, I think more about nonlinear PDE than web apps... so, how are you defining mainstream?
It does? I tried copying text on the page as well as the URL and didn't notice anything. As far as I know, this isn't possible in pure JS; you need Flash. It's a security issue.
I don't have flash installed. You can't cause a copy with Javascript but you have control over what gets copied - this is important for making custom widgety things but it is being used here for evil. See https://developer.mozilla.org/en-US/docs/Web/API/element.onc...
Although, I changed 127.0.0.1 to 0.0.0.0 for a marginal gain in speed (also, I test a local server sometimes and would get served my test page instead of ads). I also took out some services I actually use.
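For anyone who hasn't seen the trick being discussed: the hosts file simply maps ad/tracker domains to a dead address, something like this (the domains here are hypothetical):

    # /etc/hosts entries of the kind described above
    0.0.0.0  ads.example.com
    0.0.0.0  tracker.example.net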
Slightly off topic, but when I saw the power distribution bus with nine different outputs for this tiny little thing, I contemplated that we've come a long way from when I was a kid and everything ran off a single 5-volt TTL bus. It seems the more people say analog is dead, and that's all I've heard my entire life, the more you actually need analog EEs.
Could you elaborate on that? High-speed hardware interests the hell out of me, but I don't have any hands-on experience there. Any books or exercises you could recommend?
Maybe the mid-level ham radio microwave experimenters? Obviously Microwave Update conference proceedings articles about 47 GHz amplifier design and 112 GHz subharmonic mixers are way too high in frequency to be relevant for mere digital people, and the 80 meter (aka 3.5 MHz) band guys are a bit on the low end for what you're trying to do...
The 20-year-old ARRL microwave experimenters manual from the local public library isn't so bad of an introduction to the issues and technology. Yes, it's old. But, yes, SMA connectors still resonate above 18 GHz, yes, people still build stuff with microstrip, and yes, a Smith chart still works like a Smith chart, even if you view it on a computer screen now.
At a much lower frequency level but still relevant, the late Bob Pease has a nice book, "Troubleshooting Analog Circuits", although his monthly magazine articles were probably equally informative and are (were?) free. A reprint of his magazine columns would probably make a highly practical (as opposed to theoretical) analog engineering curriculum. A lot of troubleshooting issues that are annoyances at low frequencies are big issues at higher ones. And the mental process of bypassing a power lead is about the same, even if the tech changes with frequency.
I think it's interesting to read classic Bitsavers manuals. How did Cray test those modules, anyway? Well, read from the source. Layout densities have changed drastically; fundamental issues have not.
Unfortunately I learned EE as an apprentice with no books but I've heard High Speed Digital Design: A Handbook of Black Magic [1] is good.
Understanding why high speed is more like RF and analog electronics has a lot to do with impedance, which is the "resistance" of a material to a change in current at a specific frequency. Ohm's law is impedance at 0 Hz (DC), but it gets a lot more complicated when you have a 133 MHz bus, because now the signal is changing fast enough that a lot of interesting effects start to pop up. For example, the capacitance of the PCB has a significant effect on rise times; your trace lengths have to be within 1/20th of the signal's wavelength of each other or the bits might arrive at different times; you have to start worrying about other MHz+ noise from the power supply based solely on the location of traces (hence you have to place filter/bypass caps more carefully); and you have to worry about mismatched impedances (the "resistance" at 133 MHz is different between the trace and a pin or something), or else you'll have a lot of energy "reflected" back at the signal source, adding interference patterns, etc. You never really see any of this in hobbyist electronics unless you have a really long cable for kHz signals or are trying to use something like SPI at 100 MHz. Once you start dealing with high-end ARM chips, DSPs, DDR2/3, PCI[e], gigabit Ethernet, HDMI, these issues become very prevalent.
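To put a number on that 1/20th-of-a-wavelength rule, here's a minimal sketch; the FR-4 effective dielectric constant of ~4 is an assumption on my part:

    import math

    C = 3.0e8          # speed of light in vacuum, m/s
    ER_EFF = 4.0       # assumed effective dielectric constant (roughly FR-4)
    f = 133e6          # the 133 MHz bus from the example above

    v = C / math.sqrt(ER_EFF)     # propagation speed along the trace
    wavelength = v / f
    print("wavelength ~ %.0f cm, so keep trace lengths matched to within ~ %.0f mm"
          % (wavelength * 100, wavelength / 20 * 1000))

That works out to roughly a metre of wavelength and a ~5-6 cm matching budget at 133 MHz, which is why this only starts to bite at bus speeds and above.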
However, the beauty of digital is that you don't really have to understand all of the analog that goes on behind it. There are a lot of simple engineering rules that make it very difficult to mess up a high-speed digital design (at least in my experience). If you have a tool like Altium, managing all the impedances, trace widths and lengths, and differential pairs becomes a cakewalk. If you know what an error looks like on an oscilloscope (whether it's a rise-time problem, a reflection problem, etc.) then you'll find it easy to work in the field. Then there's actual RF engineering, which is a whole other story (digital signals are rarely more than the mW range; RF engineering also goes beyond that into the W-kW-MW range).
I find that hard to believe because I can't imagine teaching EE beyond the first year without running face first into impedance. I mean, any time you touch on RF (which is the really interesting part of analog anyway) you have to somehow talk about the difference between DC and AC, unless you talk about it in an exclusively academic way, as if it's some EE relic that you'll never use. It might come down to the fact that everyone is taught Ohm's law first, which to me seems a terrible way to teach EE because it forces an immutable, instantaneous relationship on your circuit (aka spooky action at a distance), whereas impedance has a meaning grounded in how the electrons interact with matter. I threw away everything I learned from EE/Ohm's law when my mentor taught me impedance, because the mathematics, units, and physical intuition finally fell into place.
The benefit of an apprenticeship is that you are usually taught in a project setting with tools that have this knowledge largely built in. All you have to know is the theoretical implications of each of your use cases (and there aren't many in say, consumer smartphone design, except for the antenna) and what the jargon is for your software. Altium has "matched lengths," "interactive differential pair routing," "impedance matching," etc. which you can use with specific engineering calculators ( http://www.mantaro.com/resources/impedance_calculator.htm is my favorite) and the software will do the rest. Literally, point, click, and drag.
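If you're curious what those calculators are doing underneath, here's roughly the classic surface-microstrip approximation; a sketch only (the example geometry is made up, the formula is only valid over a limited range, and real tools use field solvers):

    import math

    def microstrip_z0(h, w, t, er):
        # approximate characteristic impedance (ohms) of a surface microstrip:
        # h = dielectric height, w = trace width, t = copper thickness (same units),
        # er = relative dielectric constant of the board material
        return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

    # example (assumed) geometry: 0.2 mm trace over 0.2 mm of FR-4, 35 um copper
    print("Z0 ~ %.0f ohms" % microstrip_z0(h=0.2, w=0.2, t=0.035, er=4.3))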
I should probably clarify. What I meant to say is that they don't really teach high-frequency analog design in school. We covered the theoretical aspects, like impedance matching and reflection, but there's a long way between knowing those things and being able to reason with them when designing a chip.
Well, if you understand the fundamentals, it's actually not that far from there to being able to design a board. Designing a chip is a whole other matter, but it can also be quite easy if you are comfortable with FPGAs and using IP cores. I'd say there's about a 0% chance of you having your own fab, so the ASIC firm you hire will usually help you synthesize the high-level language design to silicon (for a price).
Once you're comfortable with your routing tool, check out the following links. This is the material I use as a refresher when jumping back into high speed digital design:
I should mention that I'm a computer engineering major and not an EE major, so I haven't taken some of the more advanced physics classes like solid-state physics or electromagnetics. Are there any resources you would recommend for getting more of the theoretical background on those topics?
I've tried to learn the physics behind EE, but my brain has been shot too many times by primary through secondary education. As a CS major you can dive in depth into the PCI/DDR standards and they will teach you a lot about the signals. Combine that with an application note on DDR routing and you'll get the big picture.
Very interesting. This is even less expensive than the Zedboard and it has the multi-core unit as well. I think the Zynq is pretty awesome but man, I don't think Xilinx could have made using it any more complicated if they tried!
Would this be a good fit for something like video processing? Could you use the GPIO pins to actually have a video input, and then send it out via HDMI?
Should be possible. Just attach some RCA jacks to the GPIO pins and use the FPGA to do some pre-processing before sending it to the CPU. Not sure what the performance would be like, though.
I don't think that would have had much of an effect on their fundraising. The board's design was not close to completion at the time, and the funds were actually used to build the prototypes that enabled them to get this far. They've been really transparent and open, but I would agree that they could perhaps have saved themselves some work by letting the community find and troubleshoot bugs earlier.
They were at OHS talking about how they were going to open-source it when ready for production, and their kickstarter even said "All board design files will be provided as open source once the Parallella boards are released."
I have an old 65k+ passwd file that I have always wanted to get to 100% cracked passwords (purely for sentimental reasons). Unfortunately I bet that implementing DES is not a huge priority.
Including power considerations? This is an ancient 10+ year old passwd file that I spent a lot of time working on cracking for grad school. I got ~2/3 of the accounts. I figure that number includes most of the human generated passwords and a couple versions of the "randomly generated passwords." I have always been curious what the story was behind the remaining third. But I'm not curious enough to devote many resources to it.
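For reference, testing a guess against one of those old DES crypt(3) hashes looks roughly like this; a sketch using Python's Unix-only crypt module (since removed from the standard library), with a made-up passwd entry:

    import crypt  # Unix-only wrapper around the system crypt(3)

    def check(candidate, passwd_line):
        # classic passwd format: name:hash:uid:gid:gecos:home:shell
        stored = passwd_line.split(":")[1]
        salt = stored[:2]          # traditional DES crypt: 2-char salt + 11-char hash
        return crypt.crypt(candidate, salt) == stored

    # build a demo entry so the example is self-contained
    demo_hash = crypt.crypt("hunter2", "ab")
    demo_line = "alice:%s:1000:1000::/home/alice:/bin/sh" % demo_hash
    print(check("hunter2", demo_line))   # True
    print(check("letmein", demo_line))   # False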
I don't think they will. I'm not sure if the VHDL would contain any proprietary IP from them or others, but I spoke with their CEO at OHS last year, and he indicated that open-sourcing the processor itself was out of their reach. I didn't push further, but I could imagine there were IP issues involved in doing so.
Most certainly, the actual chip design will include a lot of 3rd party IP - I'm not sure they would want to create new transistor designs, etc. when those can easily be licensed from the fab company.
I don't understand electronics component pricing. The entry-level board costs $99, but the cheapest Zynq-7000 part alone costs over $100 (per Octopart)? Or maybe it's just Xilinx pricing. How does Adapteva manage these prices?