Yet another: the 96Boards[1] DragonBoard 410c[2][3] from Qualcomm has a Snapdragon 410 quad-core Cortex-A53 with a good amount of connectivity and IO interfaces (as per the 96Boards spec), plus an Adreno 306 GPU that supports the open source Freedreno[4] drivers (if you have a binary blob allergy). That said, it's in a whole different performance tier than the TX1, but if you want to start poking at 64-bit ARM, it might be worth a look, especially at the comparatively low price.
That really frustrates me. I'm running a Dreamplug [1] as one of the main boxes on my network. The second USB port has died and it's really starting to show its age.
The Gigabyte board would give me enough SATA ports and grunt to run a decent NAS - I think it'd give the C2750 from Intel a run for its money, power wise.
Your linked article has a comment suggesting the Gigabyte board retails for 987 Euros. Jeebus.
It's dirt cheap ($10-20), easy to load Linux on [1], and it can be a simple file server, run minidlna, Google Cloud Print [2], all kinds of stuff.
800MHz is not a screamer, but most simple I/O tasks like A/V streaming and printing are a walk in the park.
Funny you should mention that. I've got one on the bench from a failed u-boot upgrade (worked after the first reboot, then never again). I have the right tools to fix it; I'm just wondering whether it's worth the time or effort. They're $40-50 AUD here as they're not sold locally (?)
It has a VGA port and what seems to be a verrry modest video system. However, it also has two PCIe3 ×16s, so presumably you can use your own GPU, if the software support is there. There doesn't seem to be an integrated GPU in the AMD dev kit http://www.amd.com/en-us/innovations/software-technologies/s... either, so presumably there is Linux ARM PCI GPU support out there, unless everyone is just SSHing in?
The price is quite a disappointment, particularly compared to the Shield's pricing (and given how long NVIDIA has taken to present the kit - the X1 was introduced almost a year ago, and the Shield has been on the market since early this year).
Let's see in the next few days how the board's performance stacks up once the embargo on publishing test results is over.
Remember that the published 1 TFLOPS figure is FP16, not FP32.
I think with the dev kit you're mostly paying for the motherboard-type thing and all the software.
The module itself doesn't seem that expensive, given that you're getting a top-notch ARM SoC. Personally I'd be interested in a group buy when it gets released (early next year, for $299 in 1k quantities).
The only Cortex-A72 chip that I know of is the MediaTek MT8173, and the GPU on that is nowhere close to a Tegra X1. So it's not like this is a previous-generation chip, despite being close to a year old.
Also remember that NVIDIA loves saying that this is as fast as the ASCI Red supercomputer from 2000... but that was a 1 TFLOPS FP64 machine. This NVIDIA chip gets a total of 16 GFLOPS of FP64... a whopping ~1/64 the power of the ASCI machine they try to compare it to.
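To put rough numbers on that ratio (a back-of-the-envelope sketch; the 1/32 FP64:FP32 ratio is the commonly quoted figure for Maxwell-class GPUs and is an assumption here, not something NVIDIA states for the X1):

    # Back-of-the-envelope FLOPS comparison; figures are quoted/assumed, not measured.
    fp16_tflops = 1.0                 # NVIDIA's headline X1 number (FP16)
    fp32_tflops = fp16_tflops / 2     # FP16 rate is double the FP32 rate on this part
    fp64_tflops = fp32_tflops / 32    # assumed 1/32 FP64:FP32 ratio for Maxwell-class GPUs

    asci_red_fp64_tflops = 1.0        # the machine in NVIDIA's comparison (FP64)
    print(f"X1 FP64: ~{fp64_tflops * 1000:.0f} GFLOPS")             # ~16 GFLOPS
    print(f"ratio:   ~1/{asci_red_fp64_tflops / fp64_tflops:.0f}")  # ~1/64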
I'm disappointed too - the Jetson TK1 was marketed at $192, so this is more than triple the price. Dev kit or no, their costs didn't increase that much.
Unfortunately they're pretty much the only game in town for this kind of hardware. There's a few alternatives (Parallella, FPGAs, etc) but nothing with the kind of ecosystem that CUDA provides.
Very impressive, but it feels expensive to me. I'm cheap though.
The linked article was pretty light on detail, and weirdly written. It never states how many cores the CPU has; I had to check Nvidia's page (http://www.nvidia.com/object/jetson-tx1-dev-kit.html) to find out: it's quad-core, so four cores.
Some pretty nice low-level embedded-style I/O on there too (GPIOs, I2C, I2S, SPI, TTL UART).
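For anyone wondering what driving those pins looks like from userspace, here's a minimal sketch via the legacy sysfs GPIO interface (the GPIO number below is a placeholder, not an actual TX1 pin mapping - you'd look up the real number in the carrier board docs):

    # Minimal sysfs GPIO blink sketch (run as root). GPIO_NUM is a placeholder,
    # not a real Jetson TX1 pin mapping -- check the carrier board documentation.
    import time

    GPIO_NUM = "38"
    GPIO_ROOT = "/sys/class/gpio"

    def write(path, value):
        with open(path, "w") as f:
            f.write(value)

    write(f"{GPIO_ROOT}/export", GPIO_NUM)                 # make the pin visible
    write(f"{GPIO_ROOT}/gpio{GPIO_NUM}/direction", "out")  # configure as output

    for _ in range(5):                                     # blink a few times
        write(f"{GPIO_ROOT}/gpio{GPIO_NUM}/value", "1")
        time.sleep(0.5)
        write(f"{GPIO_ROOT}/gpio{GPIO_NUM}/value", "0")
        time.sleep(0.5)

    write(f"{GPIO_ROOT}/unexport", GPIO_NUM)               # clean up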
It's basically a stock Tegra X1 SoC built out into a single-board computer, so it's best to just look at what has previously been written about the X1.
As such, I don't understand the embargo on performance measurements - barring any major surprises, I think Tegra X1's performance is pretty much known at this point.
The X1 has actually got 4xA53 and 4xA57 cores in a big.LITTLE style arrangement.
Is this specific to Nvidia chips? Because that's not how it works on big.LITTLE SoCs from other manufacturers. In those you can use all cores at the same time if you want to.
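Easy enough to check on whatever board you have: the kernel exposes which CPUs exist vs. which are currently online. A quick sketch (standard Linux sysfs paths; whether both clusters can be online at once is up to the particular SoC and kernel):

    # Compare the CPUs the kernel knows about with the ones currently online.
    # On cluster-switching designs only one cluster's cores show up as online;
    # with global task scheduling all eight can be online at once.
    def read(path):
        with open(path) as f:
            return f.read().strip()

    present = read("/sys/devices/system/cpu/present")   # e.g. "0-7"
    online  = read("/sys/devices/system/cpu/online")    # e.g. "0-3" or "0-7"
    print("present:", present)
    print("online: ", online)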
Yeah, I'm really hoping someone offers carrier boards much like SparkFun did with their blocks[1] for the Intel Edison which passed the Hirose DF40 connector through to allow stacking.
Yes it is: embedded is a use category, not a performance specification. It's certainly not a "desktop" or "consumer" product as it doesn't even have a case. It's designed to be embedded within another product or system.
This is exactly what I need for my deep learning research. I've been abusing my Raspberry Pi 2 with heavy neural nets. Shame about the price though. I will wait for the price to drop (hopefully).
Dunno, depending on what you're doing the Shield TV costs less and does about the same. People are running Ubuntu on these today, and you get the video, XHCI USB, GigE ethernet and WiFi working fine. Of course having a more "normal" platform is nice, but does a UART really cost 300 dollars more? (The ShieldTV has no physical serial port, at least not one I found anywhere).
nVidia...are you listening? Uncripple your firmware so booting custom images is not a song-and-dance (you broke it in 1.4!) and at least TELL us where the UART pads are on the motherboard. If you're really cool put together an "official" Ubuntu image that runs on the TX1 and the Shield.
Wish there were an affordable ARM SoC board for building DIY Spark clusters: one with a 64-bit CPU, 8 cores, gigabit Ethernet, SATA, USB 3, and 4GB of memory (8GB would be even better), under $100.
Spec-wise, the Odroid XU4 [1] from Hardkernel comes very close to meeting this requirement, though it still falls short on CPU and memory (only 32-bit and 2GB of memory for the XU4).
I would use it for personal projects at home. Just for fun.
Though I would envision there might be a niche market for affordable Spark clusters as appliances: say, a 1U box containing 20 boards with 160 cores total, 160GB of memory, etc., and only 200 watts of power consumption.
I'm curious: How are you supposed to use their card-sized module outside of the developer kit? Is the 400-pin connector something that's standardized?
Also would you need to add your own cooling here? The developer board looks like it has a beefy fan-based cooler whereas I don't see one on the module.
The SoC (card sized) has everything you'd need, just not the physical connectors for the various I/O and networking interfaces you may want. You hook up whatever connectors you like via the 400-pin interface on the module and get going.
I am looking for something that can replace RPi2/Edison in my humanoid robot for real-time computer vision. Is Jetson TX1 a good candidate? Is it a drop-in replacement with a similar power envelope as RPi2/Edison, i.e. can be powered by a small LiPo battery?
For sure:
"Jetson TX1 draws as little as 1 watt of power or lower while idle, around 8-10 watts under typical CUDA load, and up to 15 watts TDP when the module is fully utilized, for example during gameplay and the most demanding vision routines."[1]
Anecdotally, the RPi2 has been measured[2] around 4-6 watts under load and the Edison was measured[3] around 1 watt.
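As a rough sanity check on the "small LiPo" question, assuming a hypothetical 3S 2200 mAh pack and the ~10 W typical-load figure above (both assumptions, not measurements):

    # Rough battery-life estimate for a hypothetical 3S 2200 mAh LiPo.
    # Every number here is an assumption for illustration, not a measurement.
    capacity_ah = 2.2                 # 2200 mAh pack
    nominal_voltage = 11.1            # 3 cells x 3.7 V nominal
    energy_wh = capacity_ah * nominal_voltage        # ~24.4 Wh

    typical_load_w = 10               # the "typical CUDA load" ballpark quoted above
    hours = energy_wh / typical_load_w
    print(f"~{energy_wh:.1f} Wh pack -> roughly {hours:.1f} h at {typical_load_w} W")
    # Ignores regulator losses, camera/motors, and everything else on the robot.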
If Denver's JIT happens to do well, the device flies, but if your workload isn't benchmark-like or is too large to be easily JIT'ed, then the performance falls off a cliff.
I don't think those are very reliable benchmarks because they throw the entire 64/32-bit browser engine and Android stack (which at the time was the very first spin of Android 5) into the mix.
Then there's the mix of clockspeeds and process tech.
Note that even on those it wins a lot of benchmarks. So I see little evidence Denver sucked.
I am trying to come up with a target market for these boards at these prices.
The development board could be $500 or could be $1500 - that's understandable. The big question is where the market is for the actual devices with the $300 board inside.
Some sort of gaming kiosks or maybe some sort of industrial use?
Most embedded users do not need that sort of GPU performance.
The people doing GPU computing would be using desktop hardware.
Keep in mind the TX1 is just an early version trying to encourage adoption. The same chip is shipping in $200 tablets, so obviously the chip itself can be pretty cheap.
Seems well within the cost of, say, your average pro/enthusiast-level quadrotor if it could enable dynamic path finding, obstacle avoidance, and following a target.
Current systems can't for instance follow a bicyclist through a forest without hitting bushes/trees.
Current systems can't fly indoor automatically and avoid furniture, people, walls, etc.
The justification for a chip like the X1 is that it has enough CPU/GPU power for realtime vision type applications.
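As a concrete flavour of what "offload the vision work to the GPU" means in practice, a minimal sketch using OpenCV's CUDA module (assumes an OpenCV build with CUDA enabled; the camera index and Canny thresholds are placeholders, and this is nowhere near a drone-ready pipeline):

    # Sketch: run a per-frame edge-detection pass on the GPU via OpenCV's CUDA module.
    # Needs an OpenCV build with CUDA enabled; camera index and thresholds are placeholders.
    import cv2

    cap = cv2.VideoCapture(0)                          # e.g. an onboard camera
    canny = cv2.cuda.createCannyEdgeDetector(50, 150)  # placeholder thresholds

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gpu_frame = cv2.cuda_GpuMat()
        gpu_frame.upload(frame)                        # host -> device copy
        gray = cv2.cuda.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)
        edges = canny.detect(gray)                     # Canny runs on the GPU
        result = edges.download()                      # device -> host for whatever comes next
        # ...feed `result` into the obstacle-avoidance logic here...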