Hurricane season is here. Are you prepared?

Source: http://www.noaa.gov/near-normal-atlantic-hurricane-season-most-likely-year

NOAA’s Climate Prediction Center says the 2016 Atlantic hurricane season, which runs from June 1 through November 30, will most likely be near-normal, but forecast uncertainty in the climate signals that influence the formation of Atlantic storms makes predicting this season particularly difficult.

NOAA predicts a 70 percent likelihood of 10 to 16 named storms (winds of 39 mph or higher), of which 4 to 8 could become hurricanes (winds of 74 mph or higher), including 1 to 4 major hurricanes (Category 3, 4 or 5; winds of 111 mph or higher). While a near-normal season is most likely with a 45 percent chance, there is also a 30 percent chance of an above-normal season and a 25 percent chance of a below-normal season. Included in today’s outlook is Hurricane Alex, a pre-season storm that formed over the far eastern Atlantic in January.

“This is a more challenging hurricane season outlook than most because it’s difficult to determine whether there will be reinforcing or competing climate influences on tropical storm development,” said Gerry Bell, Ph.D., lead seasonal hurricane forecaster with NOAA’s Climate Prediction Center. “However, a near-normal prediction for this season suggests we could see more hurricane activity than we’ve seen in the last three years, which were below normal.”

Bell explained there is uncertainty about whether the high activity era of Atlantic hurricanes, which began in 1995, has ended. This high-activity era has been associated with an ocean temperature pattern called the warm phase of the Atlantic Multi-Decadal Oscillation or AMO, marked by warmer Atlantic Ocean temperatures and a stronger West African monsoon. However, during the last three years weaker hurricane seasons have been accompanied by a shift toward the cool AMO phase, marked by cooler Atlantic Ocean temperatures and a weaker West African monsoon. If this shift proves to be more than short-lived, it could usher in a low-activity era for Atlantic hurricanes, and this period may already have begun. High- and low-activity eras typically last 25 to 40 years.

In addition, El Niño is dissipating and NOAA’s Climate Prediction Center is forecasting a 70 percent chance that La Niña — which favors more hurricane activity — will be present during the peak months of hurricane season, August through October. However, current model predictions show uncertainty as to how strong La Niña and its impacts will be.

2016 Atlantic Hurricane Season Outlook. (NOAA)

Despite the challenging seasonal prediction, NOAA is poised to deliver actionable environmental intelligence during the hurricane season with more accuracy to help save lives and livelihoods and enhance the national economy as we continue building a Weather-Ready Nation.

“This is a banner year for NOAA and the National Weather Service — as our Hurricane Forecast Improvement Program turns five, we’re on target with our five-year goal to improve track and intensity forecasts by 20 percent each,” said NOAA Administrator Kathryn Sullivan, Ph.D. “Building on a successful supercomputer upgrade in January, we’re adding unprecedented new capabilities to our hurricane forecast models — investing in science and technology infusion to bring more accuracy to hurricane forecasts in 2016.”

Coming online later this season are major new investments to further improve NOAA’s ability to monitor hurricanes as they form and provide more timely and accurate warnings for their impacts. NOAA’s new National Water Model — set to launch later this summer — will provide hourly water forecasts for 700 times more locations than our current flood forecast system, greatly enhancing our ability to forecast inland flooding from tropical systems. In the fall, NOAA will launch GOES-R, a next generation weather satellite that will scan the Earth five times faster, with a resolution four times greater than ever before, to produce much sharper images of hurricanes and other severe weather.

NOAA works with a number of partners in the private and public sectors to ensure communities and businesses have the information they need to act well ahead of a land-falling hurricane.

“While seasonal forecasts may vary from year to year — some high, some low — it only takes one storm to significantly disrupt your life,” stated FEMA Deputy Administrator Joseph Nimmich.  “Preparing for the worst can keep you, your family, and first responders out of harm’s way. Take steps today to be prepared: develop a family communications plan, build an emergency supply kit for your home, and make sure you and your family know your evacuation route. These small steps can help save your life when disaster strikes.”

NOAA will issue an updated outlook for the Atlantic hurricane season in early August, just prior to the peak of the season.

2016 Atlantic hurricane season tropical cyclone names. (NOAA)

NOAA also issued its outlooks for the eastern Pacific and central Pacific basins. The central Pacific hurricane outlook calls for an equal 40 percent chance of a near-normal or above-normal season, with 4-7 tropical cyclones likely. The eastern Pacific hurricane outlook calls for a 40 percent chance of a near-normal hurricane season, a 30 percent chance of an above-normal season and a 30 percent chance of a below-normal season. That outlook calls for a 70 percent probability of 13-20 named storms, of which 6-11 are expected to become hurricanes, including 3-6 major hurricanes.

NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Twitter, Facebook, Instagram and our other social media channels.

Cloud Services – Stormy Short Term Forecast!

Several of our clients were affected yesterday and into today by a significant delay in the Google Postini service. Postini is a service provided by Google that pre-filters your incoming mail for spam and viruses. It also provides retention for your email, so that if your server, network, or connection is down, your mail is stored until you are back up and is then forwarded to you. By doing the pre-filtering, Postini removes the load for this processing from your local server, and that load can be significant. A firm with 10 users and fairly active internet email can see something like 20,000 inbound spam messages a day, which becomes a real load on the server, so this is a nice feature.

However, this is all not so cool when your emails are delayed by hours, as happened with yesterday's issue. Apparently no email was lost, but the delay was a real problem, and for one of our clients in particular it was a crisis. Of course this Google outage will be all the rage in internet discussions – like this blog post!

This is a major event and Google will do all they can to avoid it in the future – yet they had another major outage back in May of this year. Is this performance level still an improvement over the potential loss of email from internal outages? We think so, and we would bet on Google taking the needed steps. On the other hand, this outage raises questions about “Cloud Computing” and the risks of putting all your eggs in the cloud, so to speak. Microsoft and T-Mobile just last week had another major rain cloud when Sidekick users lost access to lots of personal data.

A factor to consider in this is the chain effect of risks – as you add more elements to the chain of processes for your electronic information, your overall risk grows. If you have a main trunk internet service provider with fiber to your office and your email comes through an in-house server to users, you have the risk of failure of that fiber line and the major provider, failure of your server and its software, and finally failures at the desktop level. If you add another service provider using someone else’s data lines, you add another point of potential failure; add in Google and you add several more potential points of failure; add in remote mail/Exchange hosting and you add in all of their points of failure as well as your connection risks.
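
To put rough numbers on that chain effect, here is a small back-of-the-envelope sketch. If every element in the chain has to be up for your mail to flow, the combined availability is the product of the individual availabilities; the uptime figures used below are illustrative assumptions, not measurements of any particular provider or product.

```python
# Back-of-the-envelope sketch: if every link in the chain must be up for mail
# to flow, the combined availability is the product of the individual
# availabilities. All uptime figures below are illustrative assumptions.

def chain_availability(availabilities):
    """Combined availability of elements that must all be up (a series chain)."""
    combined = 1.0
    for a in availabilities:
        combined *= a
    return combined

# Hypothetical uptime estimates for each element of an in-house email chain
chain = {
    "fiber line / main ISP": 0.999,
    "in-house mail server":  0.995,
    "desktop level":         0.999,
}

base = chain_availability(chain.values())
print(f"In-house chain availability: {base:.4f}")         # roughly 0.9930

# Adding a hosted pre-filtering service puts one more element in series
with_filter = base * 0.999
print(f"With a hosted filter added:  {with_filter:.4f}")  # roughly 0.9920

# Each added element nudges the combined number down a little; the question
# is whether the offsets (filtering, retention, redundancy) are worth it.
```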

There are offsets, of course – by adding Postini you gain short-term email retention, and by adding hosting you hopefully gain a hosting service with more system redundancy than your internal systems might have, etc.

Still, in considering a move to cloud computing, we need to weigh the multiplicity of risks against the benefits.

Hosted Exchange & The Cloud (Ooooh)

“The Cloud”.  The newest buzzword with all the hype in the computer and IT world.  In the next couple of posts we’ll be discussing The Cloud – what is it?  Should you use it?  Is it here to stay?  We’ll tell you everything you need to know to be an expert Cloudling – but for now, Hosted Exchange!

We have fewer clients all the time who are running Microsoft Exchange hosted on their own servers. More and more clients are using hosted Exchange with “cloud computing”. A number of our clients are using AppRiver, which has been an excellent provider of hosted Exchange services. Now Microsoft is offering hosted Exchange for a lower price than AppRiver. Microsoft is also about to release Office 365, a hosted Exchange, Office and SharePoint cloud computing service bundle.

For a low rate per user per month you can get these full services. We will be monitoring these Microsoft services for recommendations to our clients. Windows cloud is a reality.

Exciting Times for IT!

We’re going to switch gears a little from our last post about SEO (from our very own SEO guru, Alyson) and talk about some of the exciting, new things in the IT services world.  I guess it’s true that the term “exciting” is relative, but to those of us here at IT Technology and Services, Cloud and Virtualization are definitely things to get worked up about!

Cloud and virtualization, SaaS and VMware – terms that were not much more than buzzwords for the SMB space – have become mainstream. VMware virtualization offers those of us in IT services a whole new tool set of solutions for disaster recovery, in addition to the many savings in hardware utilization and system management.

VMware offers the ESXi hypervisor for free, but just a very small investment in the Essentials kit moves the implementation into a new world of vCenter and more powerful recovery tools. The next step up, Essentials Plus, offers the VMware High Availability (HA) feature and moves a network toward the Holy Grail of instantaneous failover to a standby server. Well, at least the Holy Grail if you are a computer geek or if high reliability is important for your business!


Next time, we’ll go over that mysterious term “The Cloud” – What is it? Why do you need it?  Stay tuned and you’ll find out!

Offsite Backups

I met with a prospect yesterday to discuss their IT systems and needs. They back up to a NAS device on site, with no offsite backup except quarterly. This is a fairly significant operation with a number of locations, and all the data returns to this headquarters office. So one small fire, water damage, theft, vandalism, broken pipe, tornado, hurricane, etc., and this organization loses all of its data. It amazes me that folks can be so blasé about their operational data. There are cost-effective approaches to get the data backed up and at least taken offsite!

There are also some very nice solutions that include fast onsite hard drive to hard drive backup combined with offsite backup and pre-configured server recovery from the backup device. We represent both Zenith and Barracuda solutions in this space.

The Barracuda Backup Service makes three backup copies of an organization’s primary data: one local backup and two offsite data backups to geographically-separate data centers.

Internet Connection Reliability

Is your business basically down when you lose your internet connection? Our connection was out for a short time this afternoon and our work started to grind to a halt. Crucially, our phones are out as well when the connection is down. We quickly forwarded our phones to cell phone lines, but it was still a painful reminder of our dependence on “the connection”.

Barracuda, among others, has a device to ease the pain and provide solid reliability. Their “Link Balancer” device lets you plug in several internet connections and automatically distributes the load among them. Then, in the event of a failure on one connection, the device switches your load to another live connection. So if, say, your primary T-1 line goes down, you can run on a DSL or cable connection until the T-1 is restored. You may well have slower connection speeds and may have to curtail some activity, but you will not be dead in the water. You could also have redundant T-1 lines from separate vendors. However, if they run over the same AT&T or other common local carrier, you could lose both at the same time – do a little research. It is likely that while one type of service is down, other types of service will still be functional.

When you have an Internet link failure, the Barracuda Link Balancer will automatically route your traffic to another available Internet connection without administrator intervention. The Link Balancer will then check the offline connection so that you get fast reconnection when Internet service is restored. “By automatically detecting link health and failure, the Barracuda Link Balancer assists administrators by providing a worry-free redundant connectivity to the Internet,” per Barracuda.
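
That behavior boils down to a simple health-check loop: probe each WAN link, send traffic over the first healthy link in your preferred order, and keep probing so a restored link gets picked up again. The sketch below is our own illustration of that idea in Python, not Barracuda code; the link names, source addresses and probe targets are made-up examples.

```python
import socket
import time

# Illustrative WAN links; the source addresses and probe targets are made up.
# Binding the probe to a link's source address pushes it out that interface
# (assuming the routing table cooperates), a crude stand-in for the per-link
# health checks a real link balancer performs.
LINKS = {
    "primary_t1": {"source_ip": "192.0.2.10",    "probe": ("8.8.8.8", 53)},
    "backup_dsl": {"source_ip": "198.51.100.10", "probe": ("1.1.1.1", 53)},
}

def link_is_up(link, timeout=2.0):
    """Health check: can we open a TCP connection from this link's address?"""
    try:
        with socket.create_connection(link["probe"], timeout=timeout,
                                      source_address=(link["source_ip"], 0)):
            return True
    except OSError:
        return False

def pick_active_link(preferred_order):
    """Return the first healthy link in preference order (automatic failover)."""
    for name in preferred_order:
        if link_is_up(LINKS[name]):
            return name
    return None

if __name__ == "__main__":
    order = ["primary_t1", "backup_dsl"]
    while True:
        active = pick_active_link(order)
        if active is None:
            print("All links are down - dead in the water after all.")
        else:
            print(f"Routing traffic over: {active}")
        time.sleep(30)  # re-check periodically so a restored link is used again
```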

Barracuda has 2-, 3- and 6-connection devices available.

Seems like one way to have more business peace of mind – for a price, of course!

VMware is Cool, or maybe "VMware is Hot"!

I have done some more studying of VMware and its product line, and they certainly seem to have it together in their product offering. It should be noted here that virtualization technology predates VMware and that they have competition in Microsoft Hyper-V, XenSource, Red Hat, etc., but I am going to focus on VMware for now. The basic premise of the technology, for the uninitiated, is virtualization of servers (and desktops, but more on that later) on physical machines. A server running an application on a physical machine is "virtualized" – that is, the software, data, network interface card, RAM, CPU, storage, BIOS, etc. are all turned into code elements and run as a "virtual machine" on another server that can then hold a number of these virtual machines. The initial driving force of this technology was server consolidation. It is typical to be able to consolidate an average of 15 existing servers onto 1 after virtualization. There are obvious hardware savings to doing this, as well as energy, maintenance and rack space savings.

Thanks to the wonderful world of competition, the basic software tool that allows the virtualization of a server is available for free from both (and not by coincidence) VMware and Microsoft. This tool is called a hypervisor, and the latest VMware hypervisor is ESXi – again, freely available.

The VMware world has moved way beyond the hypervisor itself – although that technology remains at the core. The main thrust of data center offerings by VMware is around central management of servers for reliability, energy savings and efficiency of operations. This is where some of the way cool stuff happens – once you get jaded with 15 or 20 servers running on one box!

The main VMware product is vSphere, which provides centralized management of the virtual servers, running ESXi or ESX, under its control. Aside from really efficient central management and control, some of the impressive features available include vMotion, which allows you to migrate a server from one host machine to another on the fly – while the server is running – with no loss of accessibility! Other modules can monitor the load on a pool of servers and shift operating load so that some servers can be idle while others are fully utilized. Those servers that are not needed can also be powered down and restarted when needed.

I mentioned virtual desktops earlier, and this I think is really exciting technology. "Exciting?" you might say. While I am not excited by the average new technical gizmo, major shifts in how we provide computing capabilities to users, huge new markets and technical challenges are at the least very interesting. Running around and maintaining desktop PCs all through a big office is a huge waste of time, and the whole PC interaction with its software and other devices is a mess that, as an engineer, I have always felt was designed for kids, by kids! The "VMware View" approach to enterprise desktops – reducing desktops to virtual machines, basically files on a central server that can be copied, saved, recreated and provisioned for new setups, all in minutes – is a very powerful paradigm shift. That the approach is already migrating to smaller environments as well is a given. Big changes ahead, and the change has great promise!

How to reduce IT costs

An enterprise VAR survey quoted in the July 2009 issue of INFOSTORE magazine asked about the "biggest opportunities" for customers to "reduce IT costs". By far the biggest option was "Virtualization", with 49% of respondents mentioning it. The second choice was a surprising one – "data deduplication" – with 18% of respondents listing it. The #3 choice – way down at 4% – was the not-very-innovative "delay purchases"!

Data deduplication, if you haven't heard about it, is an innovative way to reduce storage requirements. At a simple level, if you store a 15MB email attachment on your network, there may be 10 or many more copies of that attachment in various mailboxes – all taking up storage space. Data deduplication means retaining just one copy, with pointers to that copy where the other copies would be. This concept can be carried down to the data block or bit level. An algorithm can assign a hash number to each string of data and store one data copy and the indexed hash numbers. In this way, your data storage requirement can be greatly reduced. So far, the main application for data deduplication has been in backup software. Note that there are risks – as with any data compression method – so care should be taken in selecting tools to do this job. Big firms with huge data storage requirements are obviously the first targets for the technology.
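
To make the idea concrete, here is a minimal block-level sketch of the technique – our own illustration, not any vendor's implementation: split the data into chunks, hash each chunk, keep only the first copy of each unique chunk, and rebuild files from an index of hashes. The 4 KB chunk size and the sample "attachment" are arbitrary choices for the example.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; real products often use smarter, variable-size chunking

def dedupe(data, store):
    """Split data into chunks, keep each unique chunk once, return the hash index."""
    index = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # only the first copy of a chunk is stored
        index.append(digest)
    return index

def rebuild(index, store):
    """Reassemble the original data from its list of chunk hashes."""
    return b"".join(store[digest] for digest in index)

if __name__ == "__main__":
    store = {}                                   # shared pool of unique chunks
    attachment = b"quarterly report " * 1000     # roughly a 17 KB attachment
    mailboxes = [dedupe(attachment, store) for _ in range(10)]  # "stored" in 10 mailboxes

    raw = len(attachment) * 10
    deduplicated = sum(len(chunk) for chunk in store.values())
    print(f"Raw: {raw} bytes, after deduplication: {deduplicated} bytes")
    assert rebuild(mailboxes[0], store) == attachment
```
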
Virtualization – choice number 1 in this survey – is a money saver even for, and perhaps especially for, firms that are quite small. I say especially for small firms because you can get the first-step copy of VMware or Microsoft's Hyper-V at no cost. Now, if you have one or two servers, virtualization is of no real utility, but when a special application, separate Exchange server, etc. comes along beyond that, virtualization can save costs and add powerful disaster recovery options. Of course the savings really grow as you get into more and more servers.