I'm really interested in personal injury attorneys
And mesothelioma. I hope my browser will show lots of those ads.
The 3n+1 (Collatz) problem.
I would happily support DRM that actually cared about customers' rights. I want a guarantee that, like physical media, DRM-protected content will be available in the far future. Blu-ray already fails this test, and I only purchase Blu-rays to strip the DRM and save the content in a long-term format. I want the ability to gift, loan, or sell any media I possess the rights to. I don't want to possess merely a ticket that grants me admittance to content for a limited time, under limited conditions, subject to the dissolution of whatever producer, licensor, or operator manages the DRM scheme.
Because piracy has absolutely no effect on 99% of customers, I am fairly certain that what content producers/licensors truly fear is "casual piracy" and fair uses like loans and libraries, where market forces drive the resale cost of digital media down to its natural free-market price.
It's perfectly natural to resist inferior DRM schemes by refusing to make them standard. If you want me to support an open DRM standard, it needs to be capability-based, with normal customers like you or me represented as first-class owners of those capabilities, and it needs to implement a durable scheme for transferring those capabilities into the indefinite future.
For example, consider an ownership-based scheme where producers issue N digitally-signed capabilities to a particular copyrighted work and sell them to customers on an electronic marketplace. Bitcoin has proven that it's possible to maintain a globally consistent transaction ledger of ownership of individual tokens. A much cheaper implementation could maintain ownership and facilitate programmatic transfer of capabilities to digital works (to support sales, gifts, and even temporary loans), because the marginal value of acquiring more than one capability to the same work is zero, so there would be little need to spend gigawatts of electricity defending the blockchain against adversaries. The copyrighted work doesn't even have to be encrypted: just make standards-compliant devices/software require current ownership of a capability to use the work. Yes, this is an easily defeated scheme for pirates, but so is every other DRM scheme. At least this one respects individual property rights, the first-sale doctrine, fair use, and libraries for the vast majority of users.
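The scheme above could be prototyped roughly like this. A toy Python sketch, not any real standard: `issue_capability`, `transfer`, and `verify` are hypothetical names, and an HMAC stands in for the real public-key signature plus shared ledger.

```python
# Toy capability sketch: the producer signs (work, owner, serial)
# tokens; a standards-compliant player would check the signature and
# current ownership before using the work. HMAC stands in for a real
# public-key signature; all names here are illustrative.
import hashlib
import hmac
import json

PRODUCER_KEY = b"producer-secret"  # stand-in for the producer's signing key

def issue_capability(work_id: str, owner: str, serial: int) -> dict:
    token = {"work": work_id, "owner": owner, "serial": serial}
    payload = json.dumps(token, sort_keys=True).encode()
    token["sig"] = hmac.new(PRODUCER_KEY, payload, hashlib.sha256).hexdigest()
    return token

def transfer(token: dict, new_owner: str) -> dict:
    # A real scheme would record this transfer on the shared ledger;
    # here we simply re-issue the capability to the new owner.
    return issue_capability(token["work"], new_owner, token["serial"])

def verify(token: dict, claimed_owner: str) -> bool:
    payload = json.dumps(
        {k: token[k] for k in ("work", "owner", "serial")}, sort_keys=True
    ).encode()
    expected = hmac.new(PRODUCER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["owner"] == claimed_owner

cap = issue_capability("blu-ray:some-film", "alice", 17)
cap = transfer(cap, "bob")        # first-sale style transfer
print(verify(cap, "bob"))         # new owner holds the capability
print(verify(cap, "alice"))      # previous owner no longer does
```

The point of the sketch is the property model, not the crypto: ownership is a transferable token, and enforcement is a politeness check by compliant players rather than encryption.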
In the best-case scenario, some idiot terrorist without a gun gets shot a couple of times center of mass, with no over-penetration, before setting off a bomb. Virtually any other scenario doesn't require firearms to handle; terrorists on a plane are literally at arm's length from a horde of people who hate terrorists. Terrorists taking overt action need room to maneuver, the ability to barricade themselves, or the ability to instill overwhelming fear and inaction in everyone around them. Taking over a plane with the threat of personal violence is pretty much the hardest thing a terrorist could accomplish at this point.
1.) all the photos I took of her seemed incredibly important at the time but are never looked at any more
Yeah, photos have a weird W-shaped utility: they get shared and looked at a lot when brand new; after six months to a year they sit in boxes/drives for years; after about 20 years the utility climbs again, then dips around 150 years later when no living relatives remember the people in the photos; and after a few more decades they gain historical value. Hence the need to plan for long-term storage.
Take UDF. Expand it to the PB realm, beyond the existing 2 TB limit. Add some ZFS features: ditto blocks, 64-128 bit CRCs, cryptographically signed writes with public keys, standard encryption, standard compression, the ability to duplicate the filesystem as an image (so rsync utilities are usable to preserve hierarchy), and snapshot directories a la OneFS/WAFL.
ZFS is probably your best bet for now. Oracle built filesystem-level encryption into the Solaris offering, but no luck for the free versions. There's no cryptographic signing of writes, but IMHO that's overkill when you already have to trust the whole kernel and filesystem layer; whole-disk encryption plus SHA-256 checksums gives basically the same assurance that no data has been modified. You can place holds on snapshots in ZFS to prevent them from being accidentally deleted and treat them as basically WORM.
So from your smallest box, 3 x 3 TB = 9 TB of data, and Glacier and Google Nearline (maybe others too?) charge $0.01/GB-month, so about $90/month if you back up the whole thing. I don't know how much you pay for electricity in both locations, but if a box can run/idle at 100 W and you leave it on all the time, you use ~900 kWh a year. At $0.20/kWh that's about $180/year per server. Disks every 3 years (if you get HGST's warranty) is $140/year (using $0.035/GB rough cost today), or about $27/month per server in ongoing costs, not including replacing the other hardware periodically. $54/month vs. $90/month? Sure, it's a little cheaper. If you wanted one box and one online service, running your own looks better: $120/month vs. $54/month. What about connectivity at both sites? If you're already paying an ISP for other reasons at both ends that's one thing; otherwise throw another ~$50/month on top of at least the backup server's cost. AWS and Google currently appear to charge $0/GB for incoming transfers. Of course, if you can get deals on cheap drives and run them past the warranty in a state with cheap electricity (or in a dorm room with free Internet/electricity), it's a lot cheaper.
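The arithmetic above can be sanity-checked in a few lines. All the prices are this post's assumptions, not current quotes, and the 12 TB raw-capacity figure is my own assumption (a fourth disk for redundancy, which roughly matches the ~$140/year disk cost):

```python
# Back-of-the-envelope check of the backup cost comparison above.
# Prices are the post's assumptions; raw_tb = 12 assumes a 4th disk
# for redundancy (roughly matching the ~$140/year disk figure).
data_tb = 9                             # 3 x 3 TB of data
raw_tb = 12                             # assumed raw disk capacity
cloud = data_tb * 1000 * 0.01           # $0.01/GB-month
power = 0.1 * 24 * 365 * 0.20 / 12      # 100 W at $0.20/kWh, per month
disks = raw_tb * 1000 * 0.035 / 3 / 12  # $0.035/GB, replaced every 3 years
server = power + disks

print(f"cloud backup:    ${cloud:.0f}/month")
print(f"one DIY server:  ${server:.0f}/month")
print(f"two DIY servers: ${2 * server:.0f}/month")
```

Small rounding differences aside, two DIY boxes come out a bit cheaper than one cloud copy, exactly as the comparison above concludes.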
As for security, encrypt before copying anywhere. You might as well run local disk encryption too, so you never have to worry about returning a disk with plaintext on it for warranty repair. I don't trust any company to keep the data I upload secret (FISA courts, NSA, bla bla bla), so encrypting incremental ZFS snapshots and uploading them is an efficient way of maintaining an offsite backup. I only have 1 TB I care to back up this way, so it's less sticker shock each month, but I still find it amusing that the first box I built was 4 x 320 GB RAID5 and now that costs $9/month.
That said, you could probably use a synchronized random number generator as the shared pad data. The other side would only be able to decrypt messages for as long as it buffers the random number data, after which the message is lost to everyone for eternity. This could work for a TLS session where messages are exchanged with only a couple minutes' (or preferably seconds') delay, so the buffer wouldn't need to be very big.
That's roughly the definition of a stream cipher (e.g. RC4, or a block cipher in counter mode). Only a cryptographically secure random number generator works, which is why such a thing is called a stream cipher and not just a "pseudo-random one-time pad". In any case it's not a true one-time pad, because the entropy of the pseudorandom stream is limited to the entropy of the cipher's internal state, and further limited by the entropy of the key. That means stream ciphers can in principle be broken given only the ciphertext, which is impossible for a true one-time pad. Stream ciphers also share the one-time pad's big weakness: reusing the same stream cipher key is just as bad as reusing a pad (virtually automatic recovery of all plaintexts encrypted with the same pad/keystream).
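The key-reuse weakness is easy to demonstrate. Here's a toy CTR-style stream cipher using SHA-256 as the keystream generator (purely for illustration; real designs use AES-CTR, ChaCha20, etc.). Encrypting two messages with the same keystream lets anyone XOR the ciphertexts and recover the XOR of the plaintexts, with no key at all:

```python
# Toy counter-mode stream cipher: keystream blocks are derived from
# (key, counter). SHA-256 is used as the PRF purely for illustration.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"k1"
p1 = b"attack at dawn!"
p2 = b"defend at dusk!"
c1 = xor(p1, keystream(key, len(p1)))   # first message: fine
c2 = xor(p2, keystream(key, len(p2)))   # same keystream reused: broken

# The attacker needs no key: c1 XOR c2 == p1 XOR p2, which leaks
# plaintext structure directly (same failure as a reused one-time pad).
assert xor(c1, c2) == xor(p1, p2)
print("decrypts:", xor(c1, keystream(key, len(c1))))
```

In a real protocol the fix is a unique nonce per message mixed into the counter, so the keystream never repeats under one key.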
For high throughput/IOPS requirements build a Lustre/Ceph/etc. cluster and mount the cluster filesystems directly on as many clients as possible. You'll have to set up gateway machines for CIFS/NFS clients that can't directly talk to the cluster, so figure out how much throughput those clients will need and build appropriate gateway boxes and hook them to the cluster. Sizing for performance depends on the type of workload, so start getting disk activity profiles and stats from any existing storage NOW to figure out what typical workloads look like. Data analysis before purchasing is your best friend.
If the IOPS and throughput requirements are especially low (guaranteed < 50 random IOPS per spindle [to leave headroom for RAID/background processes/degraded-or-rebuilding arrays] and no more than what a couple of 10 Gbps Ethernet ports can handle, over the entire lifetime of the system), then you can probably get away with just some SAS cards attached to SAS hotplug drive shelves and building one big FreeBSD ZFS box. Use two-way mirror vdevs (RAID10-alike) in the pool for the higher-IOPS processing group, and RAIDZ2 or RAIDZ3 with ~15-disk vdevs for the archiving group to save on disk costs.
Plan for 100% more growth in the first year than anyone says they need (shiny new storage always attracts new usage). Buy server hardware capable of 3 to 5 years of growth; be sure your SAS cards and arrays will scale that high if you go with one big storage box.
and your HR department is paying "competitive wages" at the 50th percentile?
Let me know how that works out for you.
"It's a dog-eat-dog world out there, and I'm wearing Milk-Bone underwear." -- Norm, from _Cheers_