Newsletter: November 2017

New Twist in Ongoing Feud Between Google and U.S. Government

For months now, Google has been locked in a legal showdown with the U.S. Government over a Department of Justice request for access to 22 email accounts housed on Google servers. Google has repeatedly declined to cooperate with various demands and court orders on the basis that the data is housed on overseas servers and is not subject to United States jurisdiction.

The government has attempted to use various legal instruments to compel Google’s cooperation in the matter, but to no avail. Its most recent attempt was to serve a search warrant issued under the Stored Communications Act. Google complied in part by providing some of the requested information that happened to be stored domestically, but it is adamant in refusing to release data from servers outside U.S. territory and has filed a motion to dismiss the warrant.

Unfortunately for Google, U.S. District Judge Richard Seeborg has rejected Google’s arguments and is ordering immediate compliance with the warrant, regardless of where the information is stored. However, Google seems to have a few more legal tricks up its sleeve. Google’s legal team has filed an appeal and requested that the judge hold the company in civil contempt and apply a $10,000-per-day sanction for every day that it fails to comply.

While this bizarre legal tactic may look like Google floundering, it may be much cleverer than it appears at first glance. By moving to establish its own contempt and the monetary sanction that goes with it, Google is accelerating the appeal process and putting the federal prosecutor’s office on the defensive.

The federal attorney working the case is arguing that “evidentiary hearings” must be held to determine the “equities at stake” in order to “properly devise a more severe sanction”. By throwing itself on the fire, Google is attempting to avoid a government hearing to determine just how much money it should have to fork over for a sanction to be effective, while at the same time accelerating the pace at which its appeal will move through the courts.

Whether it will all pay off in the end has yet to be seen, but as of now we can assume that Google is either: A) confident that it will win the appeal, B) not at all worried about a $10,000-per-day bill from Uncle Sam, or C) both. Only time will tell as this interesting but strange legal battle continues to unfold.

 

Hurricane Season is Over but the Costs Are Still Stacking Up: 5 Tips for IT Disaster Recovery Planning

Two of America’s largest and most populous states are still on the long road to recovery after being devastated by Hurricanes Harvey and Irma. While damage estimates for Texas and Florida are still being calculated, most sources have placed the monetary damages in the range of $150 billion.

“60% of companies that lose their data will shut down within six months” – National Archives & Records Administration

Hurricanes and other natural disasters are a fact of life; they are unavoidable. In light of this, one of the most important things a business can do is prepare. While insurance can help cover the physical damage caused by a natural disaster, how do you protect yourself from operational disruptions and the loss of critical data? The answer is simple: find a trusted partner to help you build an IT disaster recovery plan.

“93% of companies that lost their data center for ten or more days due to a disaster filed for bankruptcy within one year of the disaster.” – National Archives & Records Administration

One of the most overlooked aspects of disaster recovery planning is information technology. People tend to forget that IT infrastructure is often the lifeblood of a company, and they either do not prepare or do not prepare properly. Here are 5 basic tips to keep in mind when deciding on your IT disaster recovery plan:

1. IDENTIFY ALL CRITICAL SYSTEMS AND DATA

The rule of thumb here is: if your business can’t operate without it, then you need it backed up. This includes mail servers, CRM databases, applications and even archived data. Don’t assume you can live without something.

2. DECIDE IF YOU NEED A FULL MIRRORED ENVIRONMENT OR SIMPLE DATA REDUNDANCY

This one is simple: do you have customer-facing websites, applications or other systems that live on your production servers? If so, then you probably need a secondary environment that you can fail over to should your primary go down. If not, then you might be able to get away with simple backup storage to retain all of your important projects, databases and other info.

3. ONLY WORK WITH A TRUSTED PROVIDER

That small online company that offers disaster recovery for next to nothing? Probably not your best bet if you may need to rely on them in a pinch. Better to find a trusted provider with a proven track record that you know will be there when you need them the most.

4. FIND A GEOGRAPHICALLY SEPARATE FACILITY

That data center down the street? Not going to do you a lot of good when a tornado flattens both of your buildings or knocks out power across the whole city. While it might be nice to know that your disaster recovery solution lives next door and that you can check on it, maintain it or upgrade it at your leisure, it will be worthless if it gets taken down by the same natural disaster. You are better off finding a data center that is AT LEAST 100 miles away from your primary environment.
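One way to apply the 100-mile rule of thumb is to compute the straight-line (great-circle) distance between your primary site and a candidate facility. The sketch below uses the standard haversine formula; the coordinates are hypothetical examples, not a recommendation of any particular site.

```python
# Minimal sketch: check whether a candidate disaster recovery site satisfies
# the 100-mile separation rule of thumb, using the haversine formula for
# great-circle distance. All coordinates below are hypothetical examples.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in statute miles."""
    earth_radius_miles = 3958.8
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_miles * asin(sqrt(a))

# Hypothetical example: primary site near Houston, candidate DR site near Dallas.
distance = haversine_miles(29.76, -95.37, 32.78, -96.80)
print(f"{distance:.0f} miles apart; meets 100-mile rule: {distance >= 100}")
```

Note that straight-line distance is only a proxy: two sites 150 miles apart can still share a power grid, a flood plain, or a hurricane track, so it is worth checking those dependencies as well.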

5. ESTABLISH RTO AND RPO STANDARDS

This one might be less obvious. It is important that you decide on a Recovery Time Objective (RTO) and a Recovery Point Objective (RPO) that are realistic and will protect your business interests by having you back up and running in time to meet your customers’ needs. Figure out what this timeline looks like and discuss it with your disaster recovery solution provider. The good ones will work with you to support your requirements.
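The relationship between these two objectives and your actual setup can be sanity-checked with simple arithmetic: your backup interval must not exceed your RPO (the most data you can afford to lose), and your estimated restore time must not exceed your RTO (the longest outage you can tolerate). A minimal sketch, with all numbers hypothetical:

```python
# Minimal sketch (all numbers hypothetical): sanity-check a backup schedule
# against agreed RTO/RPO targets. RPO bounds how much recent data you can
# afford to lose, so the backup interval must not exceed it; RTO bounds how
# long a full restore may take.
from datetime import timedelta

rpo_target = timedelta(hours=4)    # max tolerable data loss
rto_target = timedelta(hours=8)    # max tolerable downtime

backup_interval = timedelta(hours=6)          # how often backups actually run
estimated_restore_time = timedelta(hours=5)   # failover + restore estimate

meets_rpo = backup_interval <= rpo_target
meets_rto = estimated_restore_time <= rto_target

print(f"RPO met: {meets_rpo}")  # False: 6h between backups exceeds a 4h RPO
print(f"RTO met: {meets_rto}")  # True: a 5h restore fits within an 8h RTO
```

In this hypothetical case the schedule fails the RPO check, which tells you to back up more frequently (or renegotiate the objective) before a disaster forces the conversation.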

While this is not a comprehensive list, it should help get you headed in the right direction. Just remember: natural disasters can happen to anyone, so don’t let yourself get caught by surprise.

 

Will Liquid Cooling be the Next Hot Thing for Data Centers?

With the advent of high-performance computing and the rise of the GPU server in enterprise applications, companies are facing a new set of hardware-related issues. Namely, how to efficiently power and cool these extremely powerful machines.

While providing power presents issues of its own, the cooling of these machines seems to be the biggest challenge. This is an especially prominent issue in smaller data centers that want to run high-density racks full of GPU-based servers. One potential solution, already hugely popular in the consumer market, is liquid cooling.

Liquid cooling units have seen widespread use among e-sports gamers, power users and those seeking the biggest and best PC components. However, liquid cooling has seen relatively little use in enterprise applications. Now, faced with the rising prevalence of machine learning and AI development, companies needing better cooling solutions have begun to turn to the consumer market and its plethora of liquid cooling options.

Enter Asetek, a company that manufactures many of the all-in-one liquid cooling units that have become so popular among gamers. It recently signed a contract to provide liquid cooling units for an NEC-built supercomputer headed to a customer in Japan. According to Asetek, this is not a one-off occurrence.

The company has said that while it still receives approximately 90% of its revenue from the consumer desktop market, the enterprise market is starting to grow. It reports having several deals in the works to supply liquid cooling units for supercomputers headed to various government agencies, including the U.S. Departments of Energy and Defense.

Nvidia, the company designing most of the GPUs used in these supercomputers, has also begun encouraging the use of liquid cooling to avoid rampant heat issues and help support higher server density.

However, there are issues associated with liquid cooling. Specifically, liquid cooling systems are much more expensive than traditional cooling methods, they draw more power, and they introduce the risks that come with running liquid inside an extremely expensive supercomputer.

Will liquid cooling begin going hand-in-hand with GPU-based servers? Only time will tell.


Copyright © 2017 Prominic.NET