A Breakdown of AI Chip Companies
post by Tomás B. (Bjartur Tómas) · 2021-06-14T19:25:46.720Z · LW · GW · 4 comments
This is a link post for https://geohot.github.io/blog/jekyll/update/2021/06/13/a-breakdown-of-ai-chip-companies.html
Comments sorted by top scores.
comment by Raemon · 2021-06-14T21:22:03.564Z · LW(p) · GW(p)
I think this post could use a summary of your takeaways or why this is relevant to LW. (It does indeed seem relevant, but it's generally good practice to include that in linkposts so people can get a rough sense of an article via the hoverover.)
comment by alexlyzhov · 2021-06-16T02:15:22.072Z · LW(p) · GW(p)
"I have heard that they get the details wrong though, and the fact that they [Groq] are still adversing their ResNet-50 performance (a 2015 era network) speaks to that."
I'm not sure I fully get this criticism: ResNet-50 is the most standard image recognition benchmark, and unsurprisingly it's the only (?) architecture NVIDIA lists in its image-recognition benchmarking stats as well: https://developer.nvidia.com/deep-learning-performance-training-inference.
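For context, the ResNet-50 figures that vendors advertise are typically inference throughput in images per second at a fixed batch size. A minimal sketch of that kind of measurement, assuming PyTorch and torchvision are available (this is illustrative only and not from the linked post or the comments):

```python
# Minimal ResNet-50 inference throughput measurement (illustrative sketch).
# Assumes PyTorch and torchvision are installed; uses a GPU if one is available.
import time
import torch
import torchvision.models as models

def resnet50_throughput(batch_size: int = 32, iters: int = 20) -> float:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = models.resnet50().eval().to(device)
    images = torch.randn(batch_size, 3, 224, 224, device=device)

    with torch.no_grad():
        # Warm-up pass so lazy initialization does not skew the timing.
        model(images)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(images)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.time() - start

    return batch_size * iters / elapsed  # images per second

if __name__ == "__main__":
    print(f"ResNet-50 inference: {resnet50_throughput():.1f} images/sec")
```

Reported numbers depend heavily on batch size, precision (FP32 vs FP16/INT8), and compiler stack, which is part of why a single headline ResNet-50 figure says little on its own.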
comment by Oscilllator · 2021-06-16T01:37:11.737Z · LW(p) · GW(p)
You are of course aware that Xilinx has its own flavour of ML stuff that can be pushed onto its FPGAs. I believe it is mostly geared towards inference, but have you considered checking the plausibility of your 'as good as a 3090' estimate against the published performance numbers of the first-party solutions?
Replies from: Bjartur Tómas
↑ comment by Tomás B. (Bjartur Tómas) · 2021-06-16T04:05:38.383Z · LW(p) · GW(p)
I did not write this post. Just thought it was interesting/relevant for LessWrong.