Bit Mountain

Discussions around miscellaneous technologies and projects for the general membership.
Shane Hathaway-p40
New Member
Posts: 10
Joined: Sun Mar 04, 2007 1:31 pm

Google

Postby Shane Hathaway-p40 » Mon Mar 05, 2007 8:21 pm

Michael wrote:I have no idea what you should do. However, within the past six months I have read several articles about Google and Microsoft building huge data centers. They must have the same issues. I wonder how they are solving the problem?


Google is solving this with simple replication across ordinary servers. Bit Mountain can implement simple replication as well, but I believe error correction is a better solution for archiving data.
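As a back-of-the-envelope comparison (my own sketch, not Bit Mountain's code; the failure probability, replica count, and segment counts are illustrative assumptions), here is the chance of losing data under simple replication versus a 10+5 error correction scheme, assuming independent segment failures within one repair window:

```python
from math import comb

def loss_probability_replication(p, copies=3):
    # With simple replication, data is lost only if every copy fails.
    return p ** copies

def loss_probability_erasure(p, data=10, parity=5):
    # A data+parity code survives up to `parity` segment losses;
    # data is lost when more than `parity` of the n segments fail.
    n = data + parity
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(parity + 1, n + 1))

p = 0.01  # assumed chance a segment is lost within one repair window
print(loss_probability_replication(p))  # about 1e-6, at 3x raw storage
print(loss_probability_erasure(p))      # about 5e-9, at only 1.5x raw storage
```

Under these assumptions the error correction scheme is both more durable and cheaper in raw disk than 3-way replication, which is the intuition behind preferring it for archives.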

russellhltn
Community Administrator
Posts: 30710
Joined: Sat Jan 20, 2007 2:53 pm
Location: U.S.

Postby russellhltn » Mon Mar 05, 2007 9:19 pm

Shane Hathaway wrote:you can be quite confident the data will remain perfect for 1000 years.


Are you saying that it will only fail once in 1000 years or the risk of failure is very low even over the course of 1000 years? :D


Another question - are you keeping stats on the equipment that fails? I assume you're expecting that failed components are changed out within a certain time frame, and that drives will fail at a certain rate. However, drive failure isn't random. As the collection of drives approaches end of life, the number of failures per unit of time will go up, possibly quite dramatically. (I'm not sure if anyone gives the standard deviation for the MTBF.) How will you monitor the situation to warn that the statistical probability of failure is reaching an unacceptable risk, and that either parts must be changed more quickly or aging components need to be proactively replaced?

As far as I'm concerned, MTBF is an educated guess based on accelerated testing, and reality might well be different. The question is: how much are you betting the farm on the manufacturer-supplied MTBF number?

Shane Hathaway-p40
New Member
Posts: 10
Joined: Sun Mar 04, 2007 1:31 pm

Postby Shane Hathaway-p40 » Tue Mar 06, 2007 3:40 am

RussellHltn wrote:Are you saying that it will only fail once in 1000 years or the risk of failure is very low even over the course of 1000 years? :D


The latter. :D

RussellHltn wrote:Another question - are you keeping stats on the equipment that fails? I assume you're expecting that failed components are changed out within a certain time frame, and that drives will fail at a certain rate.


I've seen how often failed components need to be replaced in a big SAN. Emergencies seem to happen often. But Bit Mountain isn't like that; it doesn't care if you leave failed components in the system. What's important is always having spare space. As long as Bit Mountain has enough space to recover, you can postpone drive replacement as long as you want. Even lack of recovery space is not really an emergency, though it's an integrity risk.

RussellHltn wrote:However, drive failure isn't random. As the collection of drives approaches end of life, the number of failures per unit of time will go up, possibly quite dramatically. (I'm not sure if anyone gives the standard deviation for the MTBF.) How will you monitor the situation to warn that the statistical probability of failure is reaching an unacceptable risk, and that either parts must be changed more quickly or aging components need to be proactively replaced?

As far as I'm concerned, MTBF is an educated guess based on accelerated testing, and reality might well be different. The question is: how much are you betting the farm on the manufacturer-supplied MTBF number?


I don't yet have a system for tracking expected media lifetime, since it seems to be quite unpredictable. For example, I have an 11-year-old desktop hard drive that has gone through periods of 24/7 operation yet still works great. So for now I'd rather base the calculations on pessimistic estimates and just let the storage devices fail over time. If the failure rate turns out to be higher than expected, we'll put the actual failure rate into the spreadsheet, and it will tell us whether we need to increase the number of error correction segments to compensate. This is certainly an area where we need more experience to make a good judgment.

donkent-p40
New Member
Posts: 1
Joined: Sat Jun 09, 2007 9:52 pm

Open Sourcing Bit Mountain

Postby donkent-p40 » Sat Jun 09, 2007 10:24 pm

I would also be interested in seeing Bit Mountain open sourced.

I currently have a large personal data vault spread across two RAID 5 servers that are nearing capacity. Everything is dumped semi-randomly into directories, and I was looking into setting up a new monolithic storage system where I can store files with associated metadata in a reasonably fault-tolerant way. The best (and cheapest) method publicly available seems to be Mogile, which would let me store files in an easily expandable archive, and I can add my own metadata to the Mogile database. However, if you store things at a minimum replica count of X and happen to lose X disks before being able to replace them, there is a significant chance you've lost some files. Since I'm doing this as a home user, I'm trying to do it as cheaply as possible, so a replica count of 2 is pretty much my maximum. Between my family and my job, there are times when I won't be able to repair any failures for weeks at a time.

This is why Bit Mountain is extremely interesting to me. If I understand your paper correctly, with 10 data segments and 5 error correction segments, you can lose any 5 of those 15 segments and still have lost no data. That is simply fantastic, and I'm interested to see how it works. Not only is the system several orders of magnitude more fault tolerant than Mogile, it will take only 75% of the disk space that Mogile would.
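That 75% figure checks out (my arithmetic, using the segment and replica counts from the post):

```python
# Segment and replica counts taken from the post above.
data, parity = 10, 5
erasure_overhead = (data + parity) / data   # 1.5x raw bytes per stored byte
replica_overhead = 2.0                      # a replica count of 2 costs 2x
print(erasure_overhead / replica_overhead)  # 0.75, i.e. 75% of the space
```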

Unfortunately, since you've been talking about this for over a year, I'm betting Bit Mountain isn't going to be released anytime soon. Since I'm hoping to set up my new system in the next two weeks, I'll most likely end up going with Mogile.

Thanks, Don

daryl1
Member
Posts: 73
Joined: Tue May 15, 2007 5:04 pm
Location: Central California

Cutting Edge Archiving Storage Not Released Yet To Public

Postby daryl1 » Sun Jun 10, 2007 9:59 pm

Did you know there is a cutting-edge storage device called the Holographic Versatile Disc?

It is still in the research stage, projected to hold 3.9 TB.

Current optical storage saves one bit per pulse. HVD will improve on this efficiency, with capabilities of around 60,000 bits per pulse.

The U.S. Library of Congress, not including images from the books, could be stored on six discs. These discs can hold 4,600-11,900 hours of video, or 26.5 years of uninterrupted stereo audio.

This is not the only competing technology; there are other high-capacity optical storage media in the works.

In closing, this technology is not initially for the common consumer, but for enterprises with very large storage needs. Readers will be around $15,000 and a single disc around $120-180, with prices expected to fall steadily.

This information was taken from http://en.wikipedia.org/wiki/Holographic_Versatile_Disc

I am not promoting or endorsing anything, just reporting possible new cutting-edge tech coming soon for big enterprises with large storage needs.

Thought this was interesting and worth sharing.

thedqs
Community Moderators
Posts: 1042
Joined: Wed Jan 24, 2007 8:53 am
Location: Redmond, WA

Postby thedqs » Mon Jun 11, 2007 6:32 pm

Yes, I remember reading and studying up on this about 1 1/2 years ago, when it was in research and they had a small prototype model. Since then I have mentioned it to friends and acquaintances as the next jump in storage technology IF people adopt it. It is similar to the protein cube http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/7218/19434/00897894.pdf which could store around 1 TB in a sugar-cube-sized volume.
What you need is content that can fill an appreciable amount of the device's capacity. For example, if I backed up all the computers in our house, I would fill up about 700 GB. (This is a full disc image backup without compression.) But since an HVD holds roughly 3 TB, I would be wasting 2.3 TB if I had to burn the disc all at once.
Mostly I think these discs will go to businesses that need to back up their servers, at least until personal data grows large enough to occupy 75% of a disc (roughly 2.25 TB). Since hard drives can't even hold that much yet (the largest I've seen was a 1 TB external), it will take a while for people to use this technology.

As a personal observation, I think the gaming industry will be the first to use these drives, since I am sure they can come up with a way to get HD video and HD textures into their games to fill up 3 TB. :D
- David

cannona-p40
Member
Posts: 79
Joined: Sat May 19, 2007 1:32 pm
Location: Iowa City, IA
Contact:

hash algorithm

Postby cannona-p40 » Wed Jul 18, 2007 1:16 pm

Shane Hathaway wrote:Good questions.

- The system is not very susceptible to data corruption, missing sectors, or hardware glitches in the storage nodes, since it periodically compares the data with an MD5 hash (any other hash is also possible) and automatically falls back to other storage nodes if the primary node does not reply within a configurable time limit. The only parts administrators should worry about are the network and the central database, although it's possible to build a redundant network and the database is replicated asynchronously.


FYI, cryptographers are recommending that new systems no longer use MD5. I would recommend SHA-1 as a minimum, or, even better, SHA-512. Even with a 128-bit hash, the odds of a collision are extremely small, but since you are designing a system that is supposed to last for decades, you may as well use the best technology you can.
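Since the paper describes the hash as pluggable ("any other hash is also possible"), switching is typically a one-line change. For example, with Python's hashlib (an illustrative sketch of my own, not Bit Mountain's actual code; the function name is mine):

```python
import hashlib

def segment_digest(data: bytes, algorithm: str = "sha512") -> str:
    # hashlib.new accepts "md5", "sha1", "sha256", "sha512", etc.,
    # so the algorithm can remain a configuration option.
    return hashlib.new(algorithm, data).hexdigest()

segment = b"archived segment contents"
print(segment_digest(segment, "md5"))  # 32 hex digits (128 bits)
print(segment_digest(segment))         # 128 hex digits (512 bits)
```

Storing the algorithm name alongside each digest would also let old MD5 records coexist with new SHA-512 ones during a migration.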

Just a thought.

Aaron

thedqs
Community Moderators
Posts: 1042
Joined: Wed Jan 24, 2007 8:53 am
Location: Redmond, WA

Postby thedqs » Wed Jul 18, 2007 3:28 pm

Collision attacks against SHA-1 have already been published, and it is being phased out in favor of SHA-256 at minimum.
- David

russellhltn
Community Administrator
Posts: 30710
Joined: Sat Jan 20, 2007 2:53 pm
Location: U.S.

Postby russellhltn » Wed Jul 18, 2007 5:41 pm

I ran across an interesting article today: Why RAID 5 stops working in 2009. The premise of the article is that as drives get bigger, one is more likely to encounter/cause successive drive failures as the system attempts to rebuild from the first disk failure.

What impact, if any, does this pose to Bit Mountain?

A follow-up question - given the vast amount of data, it stands to reason that some parts of the data may never be accessed in years. How would one know if the data is starting to go corrupt unless all the data is accessed frequently enough to detect when that part of the data is starting to go? Or to put it in other words, how long could a drive failure(s) go undetected? If they are not detected and corrected soon enough, one could end up going beyond what the system can recover from.

sbohanan-p40
New Member
Posts: 1
Joined: Sun Jan 28, 2007 7:42 pm
Location: Fairbanks, Alaska, USA

Postby sbohanan-p40 » Sun Jul 27, 2008 11:17 pm

Correct me if I'm wrong, but if I read things correctly, you can set Bit Mountain to verify integrity on a regular basis. That way it will check truly "archived" files even when they are not accessed, and fix any inconsistencies that may have cropped up. I liked the article, Russell. One of my instructors a couple of years ago actually did the math for us, and I've experienced what the author talks about firsthand. I may be working on a project that might require Bit Mountain or a similar solution in the future. I like the concept, and I'm definitely going to keep my eye on this project.
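A periodic scrub of that kind could look roughly like this (a simplified sketch of my own; the real system would read segments from storage nodes and trigger reconstruction from error correction segments rather than just report):

```python
import hashlib

def scrub(segments, expected_digests):
    # Re-hash every stored segment, even ones no user has read in
    # years, and report any whose digest no longer matches.
    corrupted = []
    for name, data in segments.items():
        if hashlib.sha256(data).hexdigest() != expected_digests[name]:
            corrupted.append(name)  # candidate for reconstruction
    return corrupted

store = {"seg-1": b"good data", "seg-2": b"original bits"}
digests = {n: hashlib.sha256(d).hexdigest() for n, d in store.items()}
store["seg-2"] = b"original bitz"  # simulate silent on-disk corruption
print(scrub(store, digests))       # ['seg-2']
```

Running such a pass on a schedule bounds how long a latent failure can go undetected, which addresses Russell's question about data that is never accessed.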

