There’s a rumor flying around that the world’s C: drive is almost full, meaning we could run out of data storage space, possibly soon. The problem itself is simple: The more we do on computers, the more data we need to store. In the personal realm, that covers all the world’s cat videos, baby footage and Stargate fan fiction. From a business perspective, it covers consumer analytics data, customer records and financial spreadsheets. All of it is accumulating, and all of it needs somewhere to go. 

Mark Whitby, senior vice president of the Branded Products Group at Seagate Technology, EMEA, told TechRadar that the major issue in the business world is that companies are using data analytics more than ever to try to understand their consumers. That means storing massive amounts of data, more than businesses have ever needed before. 

“Tomorrow’s competitive advantage may well be driven by the ability to quickly identify the right data, collect it, analyze it and act on it,” explained Whitby. “In order to amass this valuable digital repository, however, firms must first have enough storage capacity. And in order to drive all possible value from that data, it must also be stored in such a way as to be quick to access, efficient to manage and low-cost to maintain. Unfortunately, therein lies the rub.” 

The rub is that current data centers aren’t equipped to handle that much incoming data, especially from the Internet of Things. The number of smart products keeps growing, which means still more constant streams of data. Last year, Gartner (http://www.gartner.com/newsroom/id/2636073) predicted that 26 billion installed units will make up the Internet of Things by 2020. That’s a 30-fold increase since 2009. And that number doesn’t include PCs, tablets and phones. Gartner is talking about devices with connectivity built in, like medical devices, factory sensors and infrastructure monitoring systems. All of that equates to a massive amount of data that also needs to go somewhere. 
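
To get a feel for the scale of that growth, here is a quick back-of-envelope check on Gartner’s figures; the 2020 number comes from the press release, while the 2009 baseline is simply our inference from the “30-fold” claim.

```python
# Rough sanity check on Gartner's IoT prediction (not an official 2009 figure;
# the baseline is inferred from the "30-fold increase" claim).
units_2020 = 26e9               # Gartner: 26 billion installed units by 2020
growth_factor = 30              # "a 30-fold increase since 2009"
print(units_2020 / growth_factor)  # -> ~0.9 billion units installed in 2009
```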

Whitby also stated that, at the molecular level, it’s getting harder to squeeze more data into the same amount of physical space. Devices can already store a million times more data in the same space than they could in the 1970s. But even that has a limit, and we’re getting close to it. Building more data centers and hard drives is a time-consuming, expensive process, and one that can’t keep pace with our ability to generate data. 

None of this is to say we should just use computers less for the sake of data conservation. Data storage simply needs new and better ideas. A new technology will need to emerge, and researchers are working on it.

The new wave of data storage

Currently, several emerging technologies promise to let us store vastly more data on our devices and drives. One such technology is called RRAM (resistive random access memory), a type of computer memory that Whitby said could let our smartphones alone store tens to hundreds of thousands of times more data. 

Then there’s the new wave of DNA storage, and it’s actually as neat as it sounds. Digital data can be stored in a tiny strand of DNA. One gram of DNA could theoretically hold all the data for Facebook or Google. A research team out of ETH Zurich in Switzerland ran a test by storing the Swiss Federal Charter of 1291 and the “Archimedes Palimpsest,” a copy of an ancient Greek mathematics treatise, on some DNA. Both amounted to 83 kilobytes of data. The team estimated the data would last 10,000 years, or 1 million years if frozen.
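
The core principle is that DNA has four nucleotides (A, C, G, T), so each base can carry two bits of digital data. The sketch below is a toy illustration of that bits-to-bases mapping only; it is not the ETH Zurich team’s actual scheme, which layers error-correcting codes and silica encapsulation on top.

```python
# Toy sketch of the DNA storage principle: two bits per nucleotide.
# NOT the ETH Zurich encoding, which adds error correction; this just
# shows how arbitrary bytes map onto a strand of A/C/G/T.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"1291")          # the charter's year, as a stand-in payload
assert decode(strand) == b"1291"  # round-trips losslessly
print(strand)                     # -> "ATACATAGATGCATAC", 4 bases per byte
```

At two bits per base, even this naive mapping hints at why a single gram of DNA, with its trillions of strands, is theoretically enough for a web giant’s archive.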

Another option is called heat-assisted magnetic recording (HAMR), which uses a laser to briefly heat high-stability media so that data can be recorded magnetically. That’s the technology Seagate, Whitby’s company, was investing in as of the beginning of this year. 

Whitby commented, “HAMR is expected to increase the limit of magnetic recording by more than a factor of 100 […]. To put this in perspective, a digital library of all books written in the world would be approximately 400 TB – meaning that in the very near future conceivably all such books could be stored on as few as 20 HAMR drives.” 
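
The arithmetic behind that claim is straightforward; the figures below come from Whitby’s quote, and the per-drive capacity is simply our inference from them, not a published Seagate spec.

```python
# Back-of-envelope check on Whitby's HAMR claim (figures from the quote;
# per-drive capacity is inferred, not an official drive specification).
library_tb = 400            # "a digital library of all books written in the world"
drives = 20                 # "as few as 20 HAMR drives"
print(library_tb / drives)  # -> 20.0 TB per drive
```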

And back in 2013, researchers were already developing a 360TB storage disc that could preserve data for more than 1 million years. Not only are we going to have a lot of data in the future, but we’ll be able to preserve the information we really want to keep. 

There are lists and lists of data storage technologies that may change the way we store data, from Ethernet hard drives to helium-filled drives, all geared toward making sure we can store more and more data. 

So that raises the question: What does all this mean for data destruction?

Increased data storage and data destruction

As companies push to store more and more data, data destruction has never been more important. After all, more data on file means a potentially bigger breach, especially when the data pertains to consumer information and behavior. It’s hard to say what data destruction policies will look like down the line, when technology like DNA data storage may or may not be commonplace. But as companies push to store increasingly large amounts of data now, data destruction will require a plan. 

The three most common ways to destroy data are degaussing (removing or reducing a device’s magnetic field), overwriting all data with gibberish, and physically destroying the device. As storage capacities leap tens to hundreds of thousands of times higher, overwriting may become a lengthy process, and depending on how new technologies take shape, degaussing could become obsolete: it only works on magnetic media, so it would do nothing against, say, DNA or optical storage. 
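
For a sense of what the overwriting approach involves, here is a minimal sketch of a multi-pass random overwrite of a single file; the file path is hypothetical, and real-world tools handle problems this sketch ignores, such as SSD wear-leveling and stray filesystem copies of the data.

```python
# Minimal sketch of the "overwrite with gibberish" method: fill a file
# with random bytes for several passes. Real tools (and mechanisms like
# ATA Secure Erase) also handle SSD wear-leveling, journaling, and other
# places copies of the data can hide; this only scrubs the file's blocks.
import os

def overwrite_file(path: str, passes: int = 3, chunk_size: int = 1 << 20) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                f.write(os.urandom(n))   # cryptographically random gibberish
                remaining -= n
            f.flush()
            os.fsync(f.fileno())         # force each pass out to the device

# overwrite_file("customer_records.db")  # hypothetical file path
```

Even this simple version shows why capacity growth matters: each pass has to touch every byte, so a drive storing thousands of times more data takes proportionally longer to sanitize.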

The most complete way to get rid of large amounts of highly sensitive data is currently physical destruction. Shredding and reusing the elemental components of a machine is the most secure way to nuke our massive, seemingly infinite amounts of data. Perhaps, though, we will have new methods of data destruction to go with the new storage methods and more advanced technologies.