Commerce Commission ruling on copper

This morning’s announcements from the Commerce Commission
suggest we need a major rethink on the way we price regulated services in the
telecommunications industry.

We’ve had two recommendations handed down today – the first
around the price of unbundled local loop lines and the second around the
wholesale price of the product
most ISPs resell – UBA (unbundled bitstream
access).

It’s important we step back a little and compare the two
product sets. On the one hand, wholesale isn’t really wholesale – it’s resale.
In effect, every ISP that sells the Chorus UBA product sells a virtually
identical product to every other ISP. You can change the colour of your
advertising campaign, but the product is basically the same. There are some
variations – enhanced UBA versus basic UBA – but in essence it’s one size fits
all. It’s what we’ve always had – a re-badged product based on a standard set
of inputs.

On the other hand, unbundled services leave far more up to
the individual ISP or telco. They can define parameters like contention rates,
committed information rates and so on. They also get to pay less money to
Chorus, meaning they can invest more in the hardware or offer these differentiated
products at a competitive rate.

All of the competition we’ve seen in the fixed line
broadband market in the past five years has come from the unbundled players and
it’s precisely because of this competition that any changes to the pricing structure
need to be closely examined. I want a competitive market that has energy and
which delivers products and services that customers want. I don’t want a
government-mandated product offered at a government-mandated price because
that’s not going to deliver the drive we so badly need.

Let’s look at the prices. Unbundled access was de-averaged –
that is, urban folk paid less than rural. Actually, that’s not quite true –
customers in urban areas cost the telcos less than customers in rural areas –
the customers themselves all paid the same price, assuming they could get
unbundled service.

The Commission was told to average the prices and come up
with a new price point for all services, rural and urban. Chorus argued that
the existing prices should simply be added together and divided by two – the
competitors argued that the Commission needed to compare our figures with
services offered overseas and that the price should come down considerably.
Given this is the only opportunity to review prices for the foreseeable future
(once they have been introduced the Commission will stop its annual price
assessment regime), the ISPs and telcos are very keen to make sure the price is
right for the remainder of the decade.

Chorus argued that we need to consider uptake of UFB and
that too low a price-point for copper would mean customers have no incentive to
move to fibre. TUANZ believes the opposite is true – that if customers are to
be encouraged to take up fibre they need a reason and that reason will be found
in the kinds of products and services that will be developed firstly on a
faster copper network and then (once it’s available) on fibre. Without those
drivers, the only way we’ll see mass uptake of fibre is if customers are forced
to migrate. There should be no need for that if the incentives are set
correctly.

The Commerce Commission’s draft decision set the price at $19.75 per line, but the final report sets it at $23.53 a month per line – a reduction of just 3.85% on the average price set in 2007.
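
For those who want to check the arithmetic, here’s a quick sanity check of those figures – note the 2007 average is inferred from the quoted percentage rather than taken from the determination itself:

```python
# Back out the 2007 average UCLL price implied by the quoted 3.85% reduction.
final_price = 23.53  # $/line/month, final determination
draft_price = 19.75  # $/line/month, draft decision
reduction = 0.0385   # quoted reduction on the 2007 averaged price

implied_2007_average = final_price / (1 - reduction)
print(f"Implied 2007 average: ${implied_2007_average:.2f}")  # ~$24.47
print(f"Draft to final: {(final_price - draft_price) / draft_price:+.1%}")  # ~+19.1%
```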

This is clearly a huge win for Chorus and could potentially
cause problems for those telcos and ISPs that have unbundled and wanted a lower
price to extend their unbundled offers further into the network.

It’s not much of a change, however, so taken in isolation we
must shrug our shoulders and move on.

Let us turn to the wholesale pricing and what’s going on there. Instead of continuing with the current “retail minus” approach, the Commission has moved to a cost-based model, something TUANZ heartily agrees with. Retail minus means we never really see the real price for a wholesale service, and the approach is relatively easy to game. Prior to separation, all Telecom had to do was maintain a couple of high-end products that nobody in their right mind would buy, and the retail-minus approach meant competitors were forced to pay more for wholesale service.

Of course, now Telecom is separated, Chorus doesn’t have a
retail service to consider and so the move to cost-based services is entirely
appropriate.

Here the Commission has dropped the price from $21.46 to $8.93 for the basic service (basic UBA). Enhanced UBA services receive a similar price drop.

This will be great news for ISPs reselling Chorus’s wholesale product and means we should see more aggressive pricing in the wholesale market from the various ISPs, assuming the price point is introduced in two years’ time as expected (there’s plenty of debate on the wholesale price yet to come, I’m afraid, and the Minister’s press release makes it clear we may need to develop a uniquely New Zealand methodology for wholesale pricing).

What’s not to like about that, you say? Well, again, taken
in isolation it’s a great outcome. Prices will fall for UBA-based services in
two years’ time when it’s introduced. But this isn’t an isolated market –
instead we have to consider what will happen with both wholesale and unbundled
products.

On the one hand we’ll hopefully see a huge fall in prices for wholesale service while unbundling will continue for those that have it but not be extended out any further. Instead, we will likely see those plans for increasing the number of unbundled exchanges and cabinets come under threat as the business case for unbundling becomes squeezed by better pricing in wholesale.

There is still plenty of life left in unbundling as we know
it. Not only are there more lines yet to be served in existing exchanges (I’m
told unbundling accounts for only a relatively small percentage of the total
lines running through those exchanges) but there is still revenue upside to the
service in terms of offering VOIP services instead of the plain old telephone
service.

But it does smack of the end of unbundling’s brief but
glorious day in the sun and as customers we should be unhappy about that.
Unbundling offers a leg-up to those companies that do make the effort to
install their own equipment and has done tremendous things for the customers of
New Zealand who have been able to get it. I’d like to have seen that run
extended, but it’s not to be.

Most residential customers won’t be getting connected to the fibre network for the next four years, which means these prices are going to be our guiding light until 2016 or so. If that’s the case, we’ll need to look very closely at the VDSL product set and work out whether it will be an acceptable substitute for the time being. Currently Chorus charges a premium for the service (around $20 extra per line per month), which discourages ISPs from offering it. If that premium remains, reviewing it may well be the next job for the Commerce Commission.

Your [0800] call is [not] important to us

I’m a bit of a Woody Allen fan. His later films are a bit tedious (although I liked “Vicky Cristina Barcelona”) but in the early works his genius shines through.

I remember one character (although I have no idea which film it was in) who would arrive in a scene and immediately ring his office to tell his secretary (no PAs back then) what number he’d be at and when he’d be leaving, and what number to call him on at his next location.

It neatly summed up a world where business leaders are required to be in touch but where technology simply hadn’t caught up with that need.

Thank the gods for mobile phones, I say. They’ve helped cut that tie, freed us to work from wherever we need to, whenever we need to.

I’m typing this on my iPad from Auckland’s waterfront, watching Team Prada take their catamaran out past the Harbour Bridge, because I wanted to get out of the office for a bit. This is a good thing.

Mobility is one of the key drivers of revenue growth for the mobile phone companies, it’s one of the great drivers of the overall telecommunications era in which we live and it’s a critical component of most of our lives these days.

All of which makes me wonder just why it is so many 0800 numbers tell me to hang up and call again from a landline.

Why is it that free phone 0800 numbers are off-limits to the very devices most callers use?

The answer, sadly, lies with termination rates. Or rather, in this case, origination rates.

The Commerce Commission decision on the price of calling a mobile was to regulate the termination rate down to a more reasonable number. Unfortunately, 0800 calls were left out of this determination, because they don’t attract a “termination” rate, but rather an “origination” rate. Users don’t pay to make the call, the recipient pays to receive that call and so the decision on termination rates doesn’t apply.

Which means there’s no incentive or requirement for 0800 providers to lower their prices at all, and so they haven’t.

If you have an 0800 number for your business, a mobile call lasting one minute will cost you four times as much as a call from a landline, and so many 0800 users simply block incoming calls from mobile phones. They can’t afford that level of cost in their business and so they deny their customers that ease of access.
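
To see how that multiplier stacks up, here’s a rough illustration. The per-minute rates are assumptions; only the four-to-one ratio comes from the paragraph above:

```python
# Monthly 0800 bill for the same inbound traffic from each source.
landline_rate = 0.05             # $/minute paid by the 0800 owner (assumed)
mobile_rate = landline_rate * 4  # the four-times multiplier

minutes_per_month = 10_000       # assumed inbound call volume
for origin, rate in (("landline", landline_rate), ("mobile", mobile_rate)):
    print(f"{origin}-originated: ${rate * minutes_per_month:,.2f} per month")
```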

Customers, as they always do, bear the cost of this. Instead of being able to call from their chosen device, they have to resort to other channels to communicate with these organisations. I resort to Twitter for the most part, but not all communications with corporates or other entities can be conducted by tweet.

In this day and age, is it appropriate that 0800 number providers can force different pricing on buyers of the service, who in turn must decide whether to wear the costs or restrict their customers’ ability to make contact? Is it all about being customer friendly and customer obsessed, as we’re told, or is it about money grubbing?

Have you had any experiences you’d like to share with regard to 0800 numbers? Please share below.

Shop till you drop

The Australian government is looking at the vexing issue of international companies charging more for products in Australia than they do elsewhere in the world.

It has become apparent in recent years that in our corner of
the world we pay well above the average for all manner of products. While there’s
some justification for charging more for large items because of shipping costs,
there seems little justification for charging more for software or smaller
consumer items.

The big corporates will tell us it’s all about price points,
about what the local market can bear and what is deemed acceptable in each geographic
location.

The price for calling on a mobile in India is a fraction of
the cost of calling on a mobile in New Zealand but the downside is you have to
live in a country with a billion citizens and all that goes with it.

So do we get charged more here? Should we enact laws to
change this?

Who knows. Certainly our government has said nothing on the matter, despite the Australian inquiry. We’re remarkably silent on the issue of what is being described in Australia as “price gouging” by the electronics industry in particular.

Indeed, when Adidas decided we should pay more for World Cup jerseys simply because we’re New Zealanders and likely to be willing to pay more, we didn’t make too much of a fuss, we simply voted with our wallets and bought online. Until they decided not to sell online to anyone with a New Zealand-based IP address.

I know of at least one major corporate in New Zealand that
buys all its software via a US subsidiary because it saves around 30% on the
asking price. CHOICE Australia’s submission to the government hearing on the matter
paints a darker picture – price differentials of up to 50% on software, content
and electronic goods.

That software prices can vary by that much gives the lie to the idea that corporates simply try to hit local currency sweet spots and reveals the truth of the matter: they will charge what the market can bear, and without legislative support, we apparently can bear to pay more.

When you combine this pricing structure concept with the
corporates’ cavalier attitude towards taking part in these kinds of inquiries
and also their unwillingness to pay tax to support local jurisdictions, we start
to paint a picture of a world where the corporates increasingly control the ebb
and flow of commerce and the governmental structure is increasingly irrelevant.

I can only presume our own government isn’t interested in
pursuing these corporates out of fear they’ll simply stop selling products to New
Zealand altogether. That somehow the corporates are willing and able to take
their ball and go home.

Corporates, of course, are coin-operated; they will go where
the money is and so long as we show we’re willing to shop, they’ll be willing
to sell. Already we see NZ Post offering a US address to shoppers so we can buy
online and import directly from those companies that decline to sell outside
the US itself. That NZ Post, a government-owned agency, is willing to do that
speaks volumes about the issue.

But there is another issue at stake – tax revenue. New
Zealand, like most western countries, now gathers a significant proportion of
its tax take from GST. Shoppers who buy goods online often end up paying less
tax locally than shoppers who buy from a New Zealand-based vendor.

That will have huge ramifications for governments in the months
and years ahead.

Meanwhile, the best advice is this: if you want to pay less for exactly the same product, you’ll do well to lie about where you live, and if you want a government that will stand up to the corporates, you might want to consider Australia.

Guest Post: Data havens and the constitution

Guy Burgess is “a New Zealand lawyer, software developer, consultant, CEO of LawFlow, and feijoa farmer” as well as a chatty fellow on Twitter. He’s written about our data haven concept from the all-important legal perspective.

TUANZ CEO Paul Brislen has written a thought-provoking article on the prospects of turning New Zealand into a data haven. There’s a lot going for the idea, but as Paul notes, there are a couple of stumbling blocks, one of which is the legal situation:

The final problem then, is the legal situation. We would need to become the neutral ground, the data Switzerland if we’re to gain their trust. Publicly adhered to rules regarding data collection and retention. Privacy built in, access only under the strictest conditions.

It would indeed require some law changes to become a “data Switzerland” where, as Paul envisages, “we treat bits as bits and that’s that”, and don’t allow the Armed Offenders Squad to swoop in with helicopters if someone uploads the latest series of Mad Men.

Exactly what those laws would be is a huge kettle of fish: privacy rights, intellectual property rights, safe-harbour provisions, search-and-seizure, criminal and civil procedure, etc. But putting aside the content of those laws (and their desirability), it is worth noting that New Zealand is in a somewhat disadvantageous situation in one respect vis-a-vis most other countries. Whilst New Zealand ranks as one of the most politically stable, corruption-free, and rule-of-law-abiding countries – ideal attributes for a data haven – we are in the very rare category of countries that are both:

  • Unicameral, unlike Australia, the UK, the US, Canada, most of the EU, Japan, India, and others; and
  • More importantly, have no written constitution that entrenches rights, limits Government power, and can strike down non-compliant laws. Only a handful of countries (notably including the UK) are in this category (and this is putting aside Treaty of Waitangi complications).

By my quick reckoning, the only other country with both of the above attributes is Israel.

What this means for us, as Sir Geoffrey Palmer wrote many years ago, is that whoever is the current Government of the day has unbridled power. Theoretically, there are few if any limits on what can be passed into law – all it takes is a one-vote majority in the House of Representatives. This includes major constitutional change and retrospective law. For example, in the past decade-and-a-bit we have seen a Government change New Zealand’s highest Court from the Privy Council to a new domestic Supreme Court on a narrow majority, and retrospectively amend the law (also on a slim majority) to keep a Minister in Parliament – both things that may well have faced constitutional challenge in other countries, but here were able to be effected with the same legislative ease as amending the Dog Control Act.

What’s this got to do with becoming a data haven? Well, it means that we cannot give the highest level of assurance that a future Government won’t do certain things that might undermine our data haven credentials.

For example, being a true data haven would presumably mean strong freedom of speech laws. You would want a reasonable assurance that a data centre would not be forced to hand over or delete data due to hate speech laws (present or future), except perhaps in the very strongest cases. New Zealand does have its peculiar Bill of Rights Act covering matters such as free speech, but this does not limit parliamentary power – in fact, Parliament regularly tramples various provisions of the Bill of Rights Act, with the only requirement for doing so being that the Attorney-General must inform the House. Nor does it prevail over inconsistent Acts: if another Act removes or abrogates a right, then the Bill of Rights Act doesn’t change that. So Parliament could potentially pass a law, on the slimmest of margins, that limits freedom of speech. This is not as far-fetched as one might think in an “open and free” democracy: the process is well advanced in the UK, where people face arrest and criminal prosecution for making statements considered by the authorities to be “insulting” (such as calling a police horse “gay”). Could this extend to limiting free speech (or content) hosted in data centres? There is nothing that says it can’t, or won’t.

Compare this with the US, where most of the internet’s infrastructure, governance and data centres are located. The federal Constitution provides the highest protection possible against Government limitation of free speech. Now this obviously does not (and is not intended to) stop situations like a US federal agency shutting down Megaupload and seizing data, in that case partly on the basis of alleged intellectual property infringement. But at least the limits on what the US Government can do are constitutionally defined and prescribed.

This issue is obviously much broader than data centres, but it does highlight the question: is it acceptable, in the information age, for there to be no effective limits on Government power over our information?

Guest Post: UFB for Dummies

Steve Biddle likes to describe himself as a former trolley boy but nobody believes him about that (it’s true, I swear) so we’ll just call him a network engineer with a passion for explaining things simply.

Steve posted this to his blog over on Geekzone but kindly allowed me to republish it here.

Unless you’ve been living on another planet you’ll be aware that New Zealand is currently in the process of deploying a nationwide Fibre To The Home (FTTH) network. This network is being supported by the New Zealand Government to the tune of roughly NZ$1.5 billion over the next 10 years and is being managed by Crown Fibre Holdings (CFH). Work is presently underway deploying fibre nationwide, with several thousand homes now connected to this new network.

Much has been made of UFB retail pricing, and for many individuals and businesses the price they will pay for a UFB fibre connection could be significantly cheaper than existing copper or fibre connections. What does need to be understood, however, are the differences between fibre connection types, and the pricing structures for these different services. There have been a number of public discussions in recent months (including at Nethui in July) where comments made by people show a level of ignorance, both at a business and technical level, of exactly how fibre services are delivered, dimensioned, and the actual costs of providing a service.

So why is UFB pricing significantly cheaper than some current fibre pricing? The answer is pretty simple – it’s all about the network architecture, bandwidth requirements and the Committed Information Rate (CIR). CIR is a figure representing the actual guaranteed bandwidth per customer, something we’ll talk a lot about later. First, however, we need a quick lesson on network architecture.

Current large scale fibre networks from the likes of Chorus, FX Networks, Citylink and Vector (just to name a few) are typically all Point-to-Point networks. This means the physical fibre connection to the Optical Network Terminal (ONT) on your premises is a dedicated fibre optic cable connected directly back to a single fibre port in an aggregation switch. Point-to-Point architecture is similar to existing copper phone networks throughout the world, where the copper pair running to your house is a dedicated connection between your premises and the local cabinet or exchange, and is used only by you. Because the fibre is only used by a single customer the speed can be guaranteed and will typically be dimensioned for a fixed speed, i.e. if you pay for a 100Mbps connection it will be provisioned with a 100Mbps CIR and this speed will be achieved 24/7 over the physical fibre connection (but once traffic leaves the fibre access network it is of course up to your ISP to guarantee speeds). Speeds of up to 10Gb/s can easily be delivered over a Point-to-Point fibre connection.

The core architecture of the UFB project is Gigabit Passive Optical Network (GPON). Rather than a fibre port in the Optical Line Terminal (OLT) being dedicated to a single customer, the single fibre from the port is split using a passive optical splitter so it’s capable of serving multiple customers. GPON architecture typically involves the use of 12, 24 or 32-way splitters between the OLT and the customer’s ONT on their premises. GPON delivers aggregate bandwidth of 2.488Gb/s downstream and 1.244Gb/s upstream, shared between all the customers who are connected to it. 24-way splitters will typically be used in New Zealand, meaning that 100Mbps downstream and 50Mbps upstream can be delivered uncontended to each customer. The difference in architecture is immediately clear – rather than the expensive cost of the fibre port having to be recovered from a single customer as is the case with a Point-to-Point network, the cost is now recovered from multiple customers. The real world result of this is an immediate drop in the wholesale port cost, meaning wholesale access can now be offered at significantly cheaper price points than is possible with a Point-to-Point architecture. GPON’s shared architecture also means that costs can be lowered even further, since dedicated bandwidth isn’t required for every customer like it is with a Point-to-Point connection. The 2.488Gb/s downstream and 1.244Gb/s upstream capacity of the GPON network instantly becomes a shared resource, meaning lower costs, but it can also mean a lower quality connection compared to a Point-to-Point fibre connection.
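
If you want to see where those per-customer figures come from, the arithmetic is simple enough to check. A quick Python sketch, using only the split sizes and aggregate rates quoted above:

```python
# Shared GPON capacity divided across the splitter, per the figures above.
GPON_DOWN_GBPS = 2.488
GPON_UP_GBPS = 1.244

for split in (12, 24, 32):
    down_mbps = GPON_DOWN_GBPS * 1000 / split  # per customer, fully loaded
    up_mbps = GPON_UP_GBPS * 1000 / split
    print(f"{split:>2}-way split: {down_mbps:6.1f} Mbps down / {up_mbps:5.1f} Mbps up")

# The 24-way case gives ~103.7/51.8 Mbps per customer, which is why
# 100Mbps/50Mbps can be delivered uncontended even on a full splitter.
```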

Now that we’ve covered the basics of architecture, we need to learn the basics of bandwidth dimensioning. Above we learnt that a CIR is a guaranteed amount of bandwidth available over a connection. Bandwidth that isn’t guaranteed is known as an Excess Information Rate (EIR). EIR is a term describing traffic that is best effort, with no real-world guarantee of performance. The 30Mbps, 50Mbps or 100Mbps service bandwidth speeds referred to in UFB residential GPON pricing are all EIR figures, as is the norm with residential grade broadband services virtually everywhere in the world. There is no guarantee that you will receive this EIR speed, or that the speed will not vary depending on the time of the day, or with network congestion caused by other users. With Voice Over Internet Protocol (VoIP) replacing analogue phone lines in the fibre world, guaranteed bandwidth also needs to be available to ensure that VoIP services can deliver a quality fixed line replacement. To deliver this, UFB GPON residential plans also include a high priority CIR of between 2.5Mbps and 10Mbps which can be used by tagged traffic. In the real world this means that a residential GPON 100Mbps connection with a 10Mbps CIR would deliver an EIR of 100Mbps, and a guaranteed 10Mbps of bandwidth for the high priority CIR path.
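
If it helps to see that plan structure as data, here’s a minimal sketch – the class and field names are mine for illustration, not anything from the UFB specifications:

```python
from dataclasses import dataclass

@dataclass
class GponResidentialPlan:
    """Hypothetical model of the residential dimensioning described above."""
    eir_mbps: float  # best-effort service bandwidth (no guarantee)
    cir_mbps: float  # guaranteed high-priority bandwidth for tagged traffic

    def describe(self) -> str:
        return (f"up to {self.eir_mbps:g}Mbps best effort, plus "
                f"{self.cir_mbps:g}Mbps guaranteed for tagged (VoIP-class) traffic")

plan = GponResidentialPlan(eir_mbps=100, cir_mbps=10)
print(plan.describe())
```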

Those of you paying attention will have noticed a new word in the paragraph above – tagged. If you understand very little about computer networking or the internet you probably just assume that the CIR applies to the EIR figure, and that you are guaranteed 10Mbps on your 100Mbps connection. This isn’t quite the case, as maintaining a CIR and delivering a guaranteed service for high priority applications such as voice can only be done by policing traffic classes, either by 802.1p tags or VLANs. The 802.1p standard defines 8 different classes of service ranging from 0 (lowest) to 7 (highest). For traffic to use the CIR rather than EIR bandwidth it needs to be tagged with an 802.1p value within the Ethernet header so the network knows what class the traffic belongs to. Traffic with the correct high priority 802.1p tag will travel along the high priority CIR path, and traffic that either isn’t tagged, or is tagged with a value other than the specified value for the high priority path, will travel along the low priority EIR path. Traffic in excess of the EIR is queued, and traffic tagged with an 802.1p high priority tag that is in excess of the CIR is discarded.
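
For the curious, the 802.1p value lives in the Tag Control Information (TCI) field of the 802.1Q VLAN header, alongside the VLAN ID. A minimal sketch of how those bits pack together – the priority value of 5 and VLAN ID of 10 are purely illustrative, not values mandated by the UFB specifications:

```python
import struct

def vlan_tci(pcp: int, dei: int, vlan_id: int) -> bytes:
    """Build the 2-byte 802.1Q Tag Control Information field.

    pcp:     802.1p priority code point, 0 (lowest) to 7 (highest)
    dei:     drop eligible indicator (0 or 1)
    vlan_id: 12-bit VLAN identifier
    """
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id < 4096
    return struct.pack("!H", (pcp << 13) | (dei << 12) | vlan_id)

# A voice frame tagged as high priority: TPID 0x8100 marks the 802.1Q tag,
# then the TCI carries the 802.1p bits the network uses to pick the CIR path.
tag = b"\x81\x00" + vlan_tci(pcp=5, dei=0, vlan_id=10)
print(tag.hex())  # 8100a00a
```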

For those who aren’t technically savvy, an analogy (which is similar but not entirely correct in every aspect) is to compare your connection to a motorway. Traffic volumes at different times of the day will result in varying speeds as all traffic on the motorway is best effort, in the same way EIR traffic is best effort. To deliver guaranteed throughput without delays, a high priority lane exists on the motorway that delivers guaranteed speed 24/7 to those drivers who have specially marked vehicles that are permitted to use this lane.

There are probably some of you right now that are confused by the requirement for tagged traffic and two different traffic classes. The simple reality is that different Class of Service (CoS) traffic profiles are the best way to deliver a high quality end user experience and to guarantee Quality of Service (QoS) to sensitive traffic such as voice. Packet loss and jitter cause havoc for VoIP traffic, so dimensioning of a network to separate high and low priority traffic is quite simply best practice. Performance specifications exist for both traffic classes, with high priority traffic being subject to very low figures for frame delay, frame delay variation and frame loss.

UFB users on business plans also have a number of different plan options that differ quite considerably from residential plans. All plans have the ability to have Priority Code Point (PCP) transparency enabled or disabled. With PCP Transparency disabled, traffic is dimensioned based on the 802.1p tag value in the same way as residential connections are. With PCP Transparency enabled, all traffic, regardless of the 802.1p tag, will be regarded as high priority and your maximum speed will be your CIR rate. As the CIR on business plans can be upgraded right up to 100Mbps, GPON can deliver a service equivalent to the performance of a Point-to-Point fibre connection. Business users also have the option of opting for a CIR on their EIR (confused yet?). This means that a 100Mbps business connection can opt for a service bandwidth of 100Mbps featuring a 2.5Mbps high priority CIR, a 95Mbps low priority EIR, and a 2.5Mbps low priority CIR. This means that at any time 2.5Mbps will be the guaranteed CIR of the combined low priority traffic. The high priority CIR can be upgraded right up to 90Mbps, with such an offering delivering a 90Mbps high priority CIR, 7.5Mbps low priority EIR, and 2.5Mbps low priority CIR.
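
Those three-way splits are easier to follow as arithmetic – whatever the mix, the components always add up to the 100Mbps service bandwidth:

```python
# The 100Mbps business plan combinations described above, in Mbps.
splits = [
    # (high-priority CIR, low-priority EIR, low-priority CIR)
    (2.5, 95.0, 2.5),
    (90.0, 7.5, 2.5),
]
for hp_cir, lp_eir, lp_cir in splits:
    total = hp_cir + lp_eir + lp_cir
    print(f"{hp_cir:g} high-priority CIR + {lp_eir:g} EIR + "
          f"{lp_cir:g} low-priority CIR = {total:g}Mbps")
```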

You’re now probably wondering about 802.1p tagging of traffic. For upstream traffic this tagging can be done either by your router, or by any network device or software application that supports this feature. Most VoIP hardware for example already comes preconfigured with 802.1p settings, however these will need to be configured with the required 802.1p value for the network. Downstream tagging of traffic introduces a whole new set of challenges – while ISPs can tag their own VoIP traffic for example, Skype traffic that may have travelled from the other side of the world is highly unlikely to contain an 802.1p tag that will place it in the high priority CIR path, so it will be treated as low priority EIR traffic. ISPs aren’t necessarily going to have the ability to tag traffic as high priority unless it either originates within their network, or steps are taken to identify and tag specific external traffic, meaning that use of the CIR for downstream traffic will be controlled by your ISP.
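
As a rough illustration of upstream tagging: on Linux, an application can set a priority on its socket, and a VLAN interface’s egress QoS map can translate that priority into the 802.1p PCP bits on outgoing frames. The priority value and destination address below are assumptions for the sketch, not UFB-mandated settings:

```python
import socket

# SO_PRIORITY sets the kernel packet priority; a VLAN interface's
# egress-qos-map (e.g. "ip link ... egress-qos-map 5:5") can map that
# priority onto the 802.1p PCP bits of the outgoing tagged frame.
SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)  # socket option 12 on Linux

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, 5)   # assumed priority value
sock.sendto(b"voice payload", ("192.0.2.10", 5060))  # documentation address
```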

It is also worth noting that all of the speeds mentioned in this post refer only to the physical fibre connection. Once traffic leaves the handover point, known as an Ethernet Aggregation Switch (EAS), it’s up to the individual ISP to dimension backhaul and their own upstream bandwidth to support their users.

As part of their agreement with CFH, Chorus dropped their Point-to-Point fibre pricing in existing fibre areas in August 2011 to match UFB Point-to-Point pricing, which means customers currently in non-UFB areas will pay exactly the same price for Point-to-Point fibre access as they will do in a UFB area if they choose a Point-to-Point UFB connection. UFB GPON fibre plans won’t be available in existing fibre areas, however, until the GPON network has been deployed, either by Chorus or the LFC responsible for that area. In all UFB areas both GPON and Point-to-Point connections will ultimately be available.

I hope that this explains the architecture of the UFB network, and how connection bandwidth is dimensioned. It’s not necessarily a simple concept to grasp, but with the misinformation that exists I felt it was important to attempt to write something that can hopefully be understood by the average internet user. The varying plan and pricing options mean that end users can choose the most appropriate connection type to suit their needs, whether this be a high quality business plan with a high CIR, or a lower priced residential offering that will still deliver performance vastly superior to the ADSL2+ offerings most users have today.

And last but not least I have one thing to add before one or more troll(s) posts a comment saying fibre is a waste of time and complains about not getting it at their home for another 5 or 6 years. UFB is one of NZ’s largest ever infrastructure projects, and to quote the CFH website:

“The Government’s objective is to accelerate the roll-out of Ultra-Fast Broadband to 75 percent of New Zealanders over ten years, concentrating in the first six years on priority broadband users such as businesses, schools and health services, plus green field developments and certain tranches of residential areas (the UFB Objective).”

Residential is not the initial focus of the UFB rollout, and never has been. Good things take time.

Guest Post: More on PBX security

Those of you with an attention span will remember we talked about PBX security a little while ago and Ben and I had quite a good discussion both in the comments and on Twitter about how important it all is.

Ben blogged on it and kindly allowed me to cross-post it here. Check out more from Ben at his blog.

A couple of weeks ago Paul Brislen posted a really good piece on the TUANZ blog about PABX security. It seems some criminals had used a local company’s phone system to route a huge number of international calls, leaving them with a colossal ($250k!) phone bill. These attacks are increasingly common, and I have heard a number of similar stories.

Phone systems increasingly rely upon IP connectivity and often interface with other business processes, putting them in the domain of IT. But even if your PABX is from 1987 (mmm, beige) and hasn’t been attacked yet, that doesn’t mean it won’t be.

Both Telecom NZ and TelstraClear NZ have some good advice to start with, and you might find your PABX vendor can also give expert advice. Unfortunately many PABX systems are insecure from the factory, and a number of vendors don’t do a great job with security.

In a previous role I ended up managing several PABX systems spread across multiple sites, and learnt a few lessons along the way. Here are a few tips to get you started:

Have a single point of contact for phone issues – make it easier for users to change passwords, and get questions answered.

Educate your voicemail users, and work with them to use better passwords. Avoid common sequences like 0000, 1234 etc.

Document all the things! Make sure any company policies are documented and available (think about mobile phones etc too). Putting basic manuals on your intranet can really help new users.

Even if you outsource management of the phone system, make sure someone in your organization is responsible for it. And make sure this person gets more money!

Create calling restrictions, and put appropriate limits on where each line can call. If a line is only used for calls to local, national, and Australian numbers then that is all it should be able to call (don’t forget fax/alarm lines). Whatever you do, make absolutely sure that 111 (emergency services) works from all lines.

Standardise as many things as you can. Look at setting up system-wide call bars. Blocking 0900 numbers is a good start, and if no one will ever call Nigeria, it is a good idea to bar it. Make sure these settings are part of your standard config for new/moved sites.
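
To show the kind of logic involved in those last two tips, here’s a minimal sketch of prefix-based call barring. Real PABX restriction tables are vendor-specific, and the prefixes here are illustrative only:

```python
# Emergency calls must never be barred; everything else is policy.
ALWAYS_ALLOW = ("111",)
BARRED_PREFIXES = ("0900", "00234")   # premium rate; Nigeria (country code 234)
ALLOWED_INTERNATIONAL = ("0061",)     # Australia, for lines that need it

def may_dial(number: str, international_ok: bool = False) -> bool:
    if number.startswith(ALWAYS_ALLOW):
        return True
    if number.startswith(BARRED_PREFIXES):
        return False
    if number.startswith("00"):  # any other international call
        return international_ok or number.startswith(ALLOWED_INTERNATIONAL)
    return True                  # local and national calls

print(may_dial("111"))           # True -- always
print(may_dial("090012345"))     # False -- premium rate barred
print(may_dial("00614123456"))   # True -- Australia explicitly allowed
```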

Work with your vendor to ensure any root/master/service/vendor passwords are complex and unique. I have seen a vendor use the same service password everywhere, until a crafty hacker cracked it and then attacked many systems. Also talk to your vendor about a maintenance contract, and ensure they will install security updates in a timely manner. Restrict any remote service access where possible.

If you use auto attendants or phone menus, make sure they are secured too. Remove any option to dial through to an extension unless you are absolutely sure it is secure.

If you have multiple sites make sure that only appropriate calls can be routed between sites. Some phone hackers have been known to abuse site-site connections to work around restrictions.

If you have lots of sites, you may not always have control over the PABX, so work with your telco and have them restrict international calls as appropriate. Put this in your contract so it happens by default when you add/move sites.

If you have a mix of PABX systems/vendors at different sites, things can get very complicated and expensive, very quickly. Work on reducing the complexity.

Practice good IT security. Most PABXs from the last 10+ years are Windows/Linux boxes (usually unpatched) under the hood, and can be attacked over your network too (or used to attack your internal network!).

Ensure that both billing and system logging are enabled and monitored. Otherwise a problem won’t be spotted until the next phone bill arrives.
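
As a sketch of what that monitoring might look like in practice – the log format here is an assumption, since every PABX exports call records differently:

```python
import csv
from datetime import time

# Flag international calls made outside business hours, a common sign of
# PABX fraud. Assumes a CSV log with extension,dialed,start,duration_s rows.
def suspicious_calls(path: str):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            started = time.fromisoformat(row["start"])
            international = row["dialed"].startswith("00")
            after_hours = started < time(7, 30) or started > time(18, 0)
            if international and after_hours:
                yield row

for call in suspicious_calls("call_log.csv"):
    print(f"Check ext {call['extension']}: {call['dialed']} at {call['start']}")
```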

The most important thing to take away is an awareness of the problem. Dealing with PABXs can be complex. Don’t be afraid to get expert help. Your telco and PABX vendor are the best places to start. If you can’t get the support you need, change to one that will. If you have any advice, please add it below.

Latency

One issue remains to be resolved with the concept of New Zealand becoming a data centre for content and that’s latency.

We live in a remote corner of the world, at least what, 120ms away from our nearest large market (sorry Australia – I’m talking about the US) and unless faster-than-light quantum computing becomes a reality in the near future, we’re not going to change that.
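
That 120ms isn’t arbitrary – it falls straight out of the physics. A rough calculation, assuming light in fibre travels at about two-thirds the speed of light and a trans-Pacific cable run of very roughly 11,000km:

```python
# Propagation delay over a trans-Pacific cable (both figures approximate).
SPEED_IN_FIBRE_KM_PER_S = 200_000  # ~0.67c for light in glass
cable_km = 11_000                  # Auckland to the US west coast, roughly

one_way_ms = cable_km / SPEED_IN_FIBRE_KM_PER_S * 1000
print(f"one way: {one_way_ms:.0f}ms, round trip: {2 * one_way_ms:.0f}ms")
# ~55ms one way, ~110ms round trip -- before any routing or queueing delay
```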

Latency affects all manner of services. Voice calls are notoriously impacted by lag, as are video calling and computer gaming. Too much lag will cause your secure VPN to fall over and that makes online banking or other reputation-aware services problematic.

But what about services that aren’t time sensitive in that way? YouTube, for example, or streaming long-form movies – an activity that accounts for anywhere between a quarter and half of all US domestic traffic. What about Dropbox-like services or most of the new range of cloud computing activities (this blog, for example, is hosted by a company based in New York but I haven’t the faintest idea where the data is stored).

As Kim Dotcom said to NBR, HTML5 means multi-threaded downloads and uploads, which means that aside from the initial connection, lag isn’t an issue for services like web browsing, music or movie streaming or any of the rest of it. Local content caching is becoming the norm for such data and New Zealand could easily take a place in that market.

There is no reason any of those kinds of data can’t be stored locally and served to the world – the only impediments are a lack of competition on the international leg and a lack of willingness to go out and sell that capability to the world.

Our distance to market has always been seen as a negative – let’s make it a positive. We’re remote, we’re stable, we’re “uninvadable” by anyone who might object to freedom of information and we have cheap, renewable energy. Give us a new cable and we won’t have to worry about the negotiations between the Tiwai smelter and the power companies; New Zealand will be a net exporter of data, and that means money.

Critical infrastructure

The National Infrastructure Unit has released its review of how things are going one year on from the launch of the government’s infrastructure plan and apparently everything in telecommunications is fine.

Much work has been achieved in terms of UFB and RBI, says the report, and the government is making steady progress towards re-purposing the 700MHz spectrum for telco use.

One thing worries the writers of the report (and bear in
mind that the NIU sits inside Treasury) and that’s whether or not the regulatory
settings encourage investment.

To be frank, I’m less worried about that because we now
(finally) have competition in large swathes of the telco market and that in
itself is driving investment. The Commerce Commission’s review of the sector
shows that, even before we consider the UFB spend of $1.5 billion from our
pockets.

I have another issue that the report barely touches on:
volcanoes.

Auckland is built on a nest of them. Fifty or so volcanic cones litter the area, some long since extinct and used for tourism, others still relatively new (Rangitoto) in geological terms.

What are the risks of another one popping up? Nobody knows. Seriously, if you visit GeoNet or any of the other sites that talk about volcanoes, all you get is reassurance like this:

Auckland’s existing volcanoes are unlikely to become active again, but the Auckland Volcanic Field itself is young and still active.

Which is not very reassuring, if you ask me. Take this
information, for example:

The type of volcanic activity in Auckland means each eruption has occurred at a new location; these are coming from a single active ‘hot spot’ of magma about 100 km below the city.

Which suggests that should a new volcano pop up, it’ll
literally pop up. No point checking out the existing cones – only Rangitoto has
had repeat eruptions in the past 250,000 years.

GeoNet has 11 monitoring sites around Auckland, but when you
ask the internet about predicting the next volcanic eruption you find it’s more
of an art than a science.

The Auckland War Memorial Museum cheerfully tells us that it’s not likely in our lifetimes:

“There have been 48 eruptions over
almost 250,000 years – one every 5,000 years on average”

But it then goes on to say we don’t know when these eruptions happened, so the averaging theory may not stack up. And after that we’re told:

There is no way of knowing when it will happen.

The last eruption (Rangitoto) was by far the
biggest.

And my favourite line:

Auckland’s volcanoes are powered by runny
basaltic magma, which rises through the crust quickly, at several kilometres
per hour. So when the next eruption happens, whether it’s tomorrow or in 5000
years’ time, we won’t get much warning.

What sort of damage will a volcanic eruption in Auckland
cause? Well there’s the initial blast radius, the shockwave radius, the lava
damage (if it’s the right kind of volcano), potential for massive amounts of
super-heated water to be flung about (if it’s in the harbour), ash clouds and
so on.

I’ve written a secondary school grade essay on volcanoes because our country’s only serious international telecommunications link, the Southern Cross Cable network, lands on either side of the Auckland isthmus (one station in Takapuna and one in Whenuapai), only 15 km apart, smack on top of an active volcanic field.

A single eruption in the middle would potentially take out both landing stations and then we, as a nation, are stuffed.

All our international communications would be down for a
period of months at a minimum. Our ability to trade commodities online would
cease. Our government’s relations with other nations would grind to a halt –
and I’m not talking about our vital trade negotiations or our role in South
Pacific peace keeping, but mundane daily things like our ability to trade
currency and update the stock markets with information.

Any business based in New Zealand that tried to communicate
with customers outside New Zealand would be cut off. Our ability to let the
world know what was happening would slow to a trickle and you can forget us
recovering from that in a hurry.

I’ve always been ambivalent about the calls for another
international cable because of disaster recovery needs, but having taken a
closer look at our current situation I have to say it’s not pretty. We probably
won’t see a volcano pop up in our lifetimes, but the downside if it does happen
is quite severe.

Risk assessments must address two issues – the likelihood of
an event happening and the damage from that event if it does. In this case
we’ve got a low likelihood but a tremendous amount of damage should that event
occur.

As Wikileaks told us, the US considers the Southern Cross Cable network to be critical infrastructure that needs to be protected, and the Department of Homeland Security has included the landing sites in its National Infrastructure Protection Plan (NIPP).

We have a single point of failure that’s got a tremendous
amount of risk associated with it.

We can mitigate that risk of course. A second cable that
lands somewhere else would do the trick, but the cost of laying one ($400m for
Pacific Fibre’s planned route) is prohibitive if all we’re talking about is a
DR plan.

Fortunately, we could use said capacity for other things and
we’ve discussed that elsewhere on this blog (and I’ll do so again shortly).

Any future second cable must include a requirement that it land somewhere other than Auckland. That increases the cost to the builder, but given that I expect the government would need to be a part owner, I suspect it won’t be a problem.

You can see the government’s National Plan for
Infrastructure here
(WARNING: PDF) and it says telecommunications infrastructure is “able to deal
with significant disruption” and that the goal for the government beyond the
UFB and RBI projects is to make sure the regulatory regime works well. That’s
it, for the next 20 years, according to the plan.

In fact, the whole plan looks only at domestic
infrastructure, and I find that intriguing. Telecommunications is the only area
that is international in the way this report defines it. Gas pipelines, roads,
rail links, electricity production – it’s all domestic stuff. Only
telecommunications has that complete reliance on an international link, and yet
everything else relies on that link working continuously.

The report, quite rightly, focuses on Christchurch and the
rebuild work going on there. The report’s three year action plan, point seven,
is to “Use lessons from Christchurch to significantly enhance the resilience of
our infrastructure network”.

International telecommunications is a single point of
failure that could bring our economy grinding to a halt. It’s not about surfing
the net or downloading the next episode of our favourite TV shows, it’s about
ensuring we survive as an independent entity.

To quote from the report:

The Plan describes resilient
infrastructure as being able to deal with significant disruption and changing
circumstances. This recognises that resilience is not only about infrastructure
that is able to withstand significant disruption but is also able to recover
well. Buildings and lifelines that save lives are a priority, while the
earthquakes also revealed the resilience of widely distributed but highly
collaborative networks.

It’s time we looked at our international link in that light.

Strawman: How to save the New Zealand economy

Put aside the idea that it’s Kim Dotcom who wants to build a
new cable connecting New Zealand with the outside world for a moment and think
about what we’re really talking about here.

Firstly, we’re talking about building a data centre. Nothing
unusual in that – we have many dotted around New Zealand, some large enough to
register on the international scale of such things. Between Orcon, Vocus
Communications and Weta FX’s donation of the New Zealand Supercomputer Centre,
we have several.

But this would be orders of magnitude larger – something that would either power Dotcom’s new Me.ga service or cope with the demands placed on Google, for example. It needs to be robust, it needs to be multiply redundant and it needs power. Lots of power. Green power. Fortunately we have that and, even better, the Tiwai aluminium smelter is apparently going to become available soon and it requires 610MW to function. That’s 14% of the national output, which makes for a scary conversation with government whenever the smelter’s owners talk about packing up and leaving.

Google’s combined data centres use 260MW as best I can fathom, which leaves us in a very good position to take over production of the whole lot and do it entirely by green means. That’s quite important to a company like Google, but don’t forget Facebook, Apple, Twitter (those tweets don’t weigh much but by golly there are a lot of them) and all the rest. In fact this piece in GigaOM nails it quite nicely, so have a look at why North Carolina is the place these guys base their mega centres. Hint: power’s cheap there – cheap but dirty (61% of their electricity is from coal, 31% from nuclear).

How cheap? They pay between 5c and 6c per kilowatt hour,
which is a really good price.

According to Brian Fallow in the Herald, the Tiwai smelter also pays 5c per kilowatt hour, but don’t forget that’s New Zealand money, not US money, so let’s call it 4c per kilowatt hour in American.
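
To put those rates in context, here’s a rough annual power bill for a Google-scale load at each price. The load and rates are the figures quoted above; the exact NZD/USD conversion is an assumption:

```python
# Annual electricity cost for a 260MW load (Google's estimated combined draw).
HOURS_PER_YEAR = 24 * 365
load_mw = 260
kwh_per_year = load_mw * 1000 * HOURS_PER_YEAR

for site, usd_per_kwh in [("North Carolina", 0.055), ("Tiwai-style NZ", 0.04)]:
    print(f"{site}: ${usd_per_kwh * kwh_per_year / 1e6:,.0f}m USD per year")
# ~$125m vs ~$91m -- and the New Zealand supply is almost entirely renewable.
```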

Clearly that’s a good price, plus it’s almost all green
which means a big gold sticker for any data centre using New Zealand.

So we’ve got electricity covered, if Kim gets his submarine fibre
built we tick off another huge problem. There’s not much we can do about the
latency between here and the US, so let’s ignore that thorny issue for now. We’re
conveniently located a long way from everyone so let’s move along.

There’s the issue of land which, as we know, is hideously expensive in New Zealand. Unless it’s somewhere like Tiwai Point, in which case it’s not. I think we can tick that box off, particularly if you consider the US pricing as your benchmark.

That leaves us with two major stumbling blocks. Firstly, the
staffing situation.

We need to produce enough graduates (or import enough
graduates) to staff this kind of monstrous facility and at the moment we’re not
doing that. We don’t have any push to get secondary school students into the
industry and we don’t have any long term plan to stop this incessant churning
out of management students and encourage kids into the world of ICT.

Without these kids coming through at all levels, we’re just
not going to get a data haven off the ground.

Because that’s what we’re talking about here – turning New
Zealand into a data haven where anyone can store data safe in the knowledge
that we treat bits as bits and that’s that. Nobody is going to trust us to look
after their data if we’re willing to send in the Armed Offenders Squad in a
chopper-fuelled moment of madness on the say so of some foreign national. It’s
just not viable.

The final problem then, is the legal situation.

We would need to become the neutral ground, the data
Switzerland if we’re to gain their trust. Publicly adhered to rules regarding
data collection and retention. Privacy built in, access only under the
strictest conditions.

But think of the upside – the PM talked about New Zealand becoming
a financial hub and while I get where he was coming from, that’s old school
stuff. Let’s become the home to all things data related instead. It turns our
long-time weaknesses (distance to market, isolation, relatively small size)
into strengths. Plus we’re New Zealand! Nobody’s going to invade us, we’re too
far away and too friendly.

Latency aside, what’s the downside of getting this done? If
we build the capacity we can attract a first mover in and if we have one, we
can attract more.

Customers from banks to insurance companies to individuals
to governments to movie studios (yes, you luddites, you) could make use of our
clean power, our isolation, our cheap land and our fantastic environment to
secure their precious bits and we would get a steady, reliable source of
revenue for the country that’s sustainable in all meanings of the word.

Have I missed anything? Why won’t this work? Is anyone
thinking about this?

Kim Dot Com or Kim Dot Come on?

If nothing else, Kim Dotcom’s tweet has reignited debate about whether we need a competitive international cable market for New Zealand.

The short answer is yes, because we don’t have a competitive
international cable market. We have Southern Cross Cables and its network, but
we don’t have competition, and this concerns me. It also concerns most of the
telcos and ISPs I talk to and it should concern anyone who is supportive of New
Zealand building an internationally competitive digital economy.

In the old days we talked about the internet as the “freezer
ship” of the future. The internet could deliver the same increase in
productivity to the New Zealand economy that the introduction of freezer ships
did way back when.

It seems so quaint now, to think of the internet in terms of
commerce, when these days we use it to organise revolutions both social and
political, but there it is, the internet’s dirty little secret: it can be used
to make money.

I’ve said it before, repeatedly, and I’ll keep on saying it.
New Zealand stands to gain the most from the move to a digital economy. We can
export teaching instead of teachers, we can export intellectual property
instead of goods, we can export talent without losing those people. We can
export our business savvy, our capability, our cost effectiveness and our
willingness to get the job done. These things are all in short supply around
the world – we can fill that need and we can do it from here, without having to
go overseas unless we want to.

For that to work we need something that’s also in short
supply. Leadership. We don’t need more management, we need leaders with a
vision. We need someone to say “this is the plan, this is what we’re going to
do because it needs doing”, not “my staff didn’t inform me” or “of course I’m
worth my million dollar salary”.

If nothing else, Kim Dotcom is good at cutting through the
red tape and getting on with it. Love him or hate him, you have to give him
that. Want to share files? Build a file-sharing site. Want to play Modern Warfare
2? Lay fibre to the mansion and hook up your Xbox. Want to run an international
business from New Zealand but the infrastructure is lacking? Build the
infrastructure yourself and get on with it.

We need four things to get this digital economy off the
ground: international connectivity, cheap green power, an environment that will
attract talent and more students coming through the system keen to work in the digital
economy. Let’s focus on that for a while and see how we get on.

And if it takes an odd German with an odd name and a penchant for the wrong German cars (get a Porsche) then so be it. We’re in no position to be picky – let’s just get it built and then discuss the niceties.

I’ve heard from a number of TUANZ members who are keen to see something get off the ground. They see the need for competition on the international leg and were disappointed to see Pacific Fibre fall by the wayside.

Suggestions have ranged from a TUANZ tax on every telco bill to fund a build through to setting up a trust similar to the model used to build electricity lines around New Zealand. I’d be keen to see whether such a thing would fly – it would need the buy-in of some major telcos so we could add the pennies per call or dollars per month to the bill, but that’s not insurmountable.
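
As a back-of-the-envelope test of the levy idea, using the $400m Pacific Fibre build cost quoted earlier – the connection count and levy amount are assumptions for illustration:

```python
# How long a per-connection levy would take to fund a $400m cable build.
build_cost = 400e6
connections = 1.8e6    # assumed NZ fixed broadband connections
levy_per_month = 2.00  # assumed levy in $/connection/month

annual_take = connections * levy_per_month * 12
print(f"${annual_take / 1e6:.0f}m per year -> "
      f"{build_cost / annual_take:.1f} years to fund the build")
# ~$43m a year, or roughly nine years -- before interest or overruns
```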

What do you think? Would a publicly-funded project get off the ground?