I’m sure there are data science/center people that can appreciate this. For me all I’m thinking is how hot it runs and how much I wish soon 20TB SSDs would be priced like HDDs
Nah, datacenters care more about capacity or IOPS; throughput is meaningless since you’ll always be bottlenecked by the network.
Not necessarily if you run workloads within the datacenter? Surely that’s not that rare, even if they’re mostly for hosting web services.
Yeah, but 15 GB/s is 120 Gbit/s. Your storage nodes are going to need more than 2x800 Gbit if you want to take advantage of the bandwidth once you start putting in more than 14 drives. Also, those 14 drives probably won’t have more than 30M IOPS. Your typical 2U storage node is going to have something like 24 drives, so you’ll probably be bottlenecked by bandwidth or IOPS whether you put in 15 GB/s drives or 7 GB/s drives.
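The arithmetic above can be sanity-checked in a few lines (a quick sketch; the drive and NIC figures are the ones quoted in the comment, not vendor specs):

```python
# Back-of-the-envelope check for the numbers in the comment above.
GBps = 1e9  # bytes per second

drive_throughput = 15 * GBps          # one fast NVMe drive, ~15 GB/s
nic_bandwidth = 2 * 800 / 8 * GBps    # 2x800 Gbit NICs -> 200 GB/s

# How many drives does it take to saturate the network?
drives_to_saturate = nic_bandwidth / drive_throughput
print(f"{drives_to_saturate:.1f} drives saturate 2x800 Gbit")  # ~13.3

# Aggregate drive throughput of a 24-drive node vs. the network:
for per_drive in (15, 7):
    aggregate = 24 * per_drive
    print(f"24 x {per_drive} GB/s = {aggregate} GB/s vs 200 GB/s of network")
```

So roughly 14 of the 15 GB/s drives already exceed 2x800 Gbit of network, which is the point being made: past that, faster drives buy nothing.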
Maybe it makes sense these days, I haven’t seen any big storage servers myself, I’m usually working with cloud or lab environments.
If what you’re doing is database queries on large datasets, the network speed is not even close to the bottleneck unless you have a really dumbly partitioned cluster (in which case you need to fire your systems designer and your DBA).
There are more kinds of loads than just serving static data over a network.
Science stuff though …
I work in bioinformatics. The faster the drive the better! Some of my recent jobs were running poorly optimized code that would turn 1 TB of data into 10 TB of output. So painful to run with 36 replicates.
Are you hiring ^^ ?
Love that kind of stuff.
A lot are moving traffic through software-defined networking, which runs at RAM speeds.
But typically responsiveness is quite important in a virtualized environment.
InfiniBand can theoretically run at 2400 Gbps, which is 300 GB/s.
Agreed. I’d happily settle for 1GB/s, maybe even less, if I could get the random seek times, power usage, durability, and density of SSDs without paying through the nose.
I’d be more than happy with 1 GB/s drives for storage. I’d be happy with SATA3 SSD speeds. I’d be happy if they were still sized like a 2.5" drive. USB4 ports go up to 80 Gb/s. I’d be happy with an external drive bay with each slot doing 1 GB/s.