Nearline is Google’s cloud storage service for data archiving, online backup, and disaster recovery. It is loosely comparable to Amazon’s Glacier: both are pitched as backup or archival storage, and both cost 1 cent per gigabyte per month, but retrieval times are vastly different. Retrieving your archives from Glacier takes several hours; getting data from Nearline takes around 3 seconds.
With such a dramatic advantage in performance, why isn’t there a stampede of uptake for Nearline? Here are four possible reasons.
Not enough people know about it
Nearline went to General Availability on 23 June this year. There hasn’t been a lot of time for word to get around. I’ve mentioned Nearline to over a dozen senior technology executives in the last few months and only one, Craig Fulton, Telstra’s Head of Cloud Engineering, responded knowledgeably about Nearline. So there is a time factor at play.
There is also a publicity factor. Surprisingly for a business with the brand strength of Google, there hasn’t been a lot of media coverage of Nearline. A Google News search for the period since Nearline went to General Availability returned 20 times more results for Amazon Glacier than for Google Nearline.
Data sovereignty concerns
Australian business has a strange fixation with data sovereignty that is often based more on perceived risk than on actual risk. Adoption of Google Cloud in Australia has undoubtedly been constrained by the lack of local data centres and the associated concerns about data sovereignty, privacy and latency.
Google has a long history of being secretive about its technical architecture. That may be good business practice, but it is a barrier to adoption of Google Cloud: it leaves accountable executives with a data governance concern, and Australian executives are generally more conservative than their peers overseas.
Storage is cheap; transfer is expensive
Amazon Glacier costs 1 cent per gigabyte per month to store data, but anywhere from 5 cents to 9 cents per gigabyte to move data from Glacier out to the Internet. There is a convoluted bulk data export option, but that would still cost 3 cents per gigabyte and carries logistical barriers of its own. So existing Glacier customers are unlikely to transfer their data to Nearline, even with Google’s offer of six months of free storage for data migrated from competitors.
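To see why, compare the one-off egress fee against the ongoing storage cost using the per-gigabyte figures above. This is a back-of-the-envelope sketch using the rates cited in this article, not a live price list:

```python
# Back-of-the-envelope: cost to move an archive out of Glacier
# versus what the same data costs to keep in storage.
# Rates are the article's cited figures (USD per GB), not live pricing.
STORAGE_PER_GB_MONTH = 0.01    # both Glacier and Nearline: 1 cent/GB/month
GLACIER_EGRESS_PER_GB = 0.05   # low end of the 5-9 cent/GB retrieval range

def months_of_storage_equal_to_egress(egress_per_gb=GLACIER_EGRESS_PER_GB,
                                      storage_per_gb_month=STORAGE_PER_GB_MONTH):
    """How many months of storage the one-off egress fee would buy."""
    return egress_per_gb / storage_per_gb_month

def migration_cost(terabytes, egress_per_gb=GLACIER_EGRESS_PER_GB):
    """One-off cost (USD) to pull an archive of `terabytes` out of Glacier."""
    return terabytes * 1024 * egress_per_gb

print(months_of_storage_equal_to_egress())  # 5.0 -> egress equals 5 months of storage
print(migration_cost(10))                   # 512.0 USD to move a 10 TB archive
```

Even at the cheapest retrieval tier, the egress fee alone equals five months of storage, so six months of free Nearline storage barely covers the cost of getting the data out of Glacier in the first place.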
For enterprises with on-premises data storage, Google’s Offline Media Import / Export service is not available in Australia, so potentially huge data sets will need to be transferred ‘over the wire’. With many DSL plans topping out at around 1 megabit per second of upload bandwidth, uploading a single terabyte of data would take around 90 days. Bandwidth costs are also still meaningful to most technology managers. It’s another possible reason for the slow uptake of Nearline.
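The transfer time is simple arithmetic: bits divided by sustained link speed. A quick calculator (decimal terabytes, assuming the link runs flat out with no protocol overhead, so real transfers take longer) shows how sensitive the figure is to upload bandwidth:

```python
def upload_days(terabytes, megabits_per_second):
    """Days to push `terabytes` of data over an uplink of
    `megabits_per_second`, assuming full, continuous utilisation
    with no protocol overhead (real transfers will be slower)."""
    bits = terabytes * 1e12 * 8               # decimal terabytes -> bits
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86400

print(round(upload_days(1, 1), 1))    # 92.6 days on a ~1 Mbps DSL uplink
print(round(upload_days(1, 100), 1))  # 0.9 days on a 100 Mbps link
```

Only at typical ADSL upload speeds of around 1 megabit per second does a terabyte take roughly three months; on a 100 Mbps link the same transfer finishes in under a day.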
Recovery time is less important for archival storage
By its very nature, backup and archival storage is infrequently accessed. It is not mission-critical data, and it is often stored more for regulatory than for operational reasons. So on the surface, 3-second retrieval may seem far superior to several hours; in reality, for the way this type of storage has historically been used, it makes little difference to the everyday running of a business.
To some extent this is a legacy of habit. Storage that takes several hours to retrieve will only be used in certain ways; storage that is cheap and has 3-second latency will have use cases that clever developers will eventually discover. Nearline also improves on Glacier by exposing data through the same APIs and client libraries as Google Cloud Storage. Much like universities use Amazon’s spot pricing to cut the cost of processing large data sets, Nearline will surely find a new functional niche.