
Guest post: Researcher evaluating the UFB deployment

Marlies Van der Wee is a student at Ghent University in Belgium. She is visiting the University of Auckland Business School to study fibre deployments and compare the New Zealand approach with similar but different European models.

Marlies needs input from those in the industry who are involved in the UFB project to better understand the characteristics of the deployment.

More information on the project can be found here and contact details for Marlies are at the bottom of the post.

 

Evaluating the UFB deployment?

In New Zealand, the government is investing $1.5 billion in the Ultra-Fast Broadband (UFB) deployment, giving 75% of New Zealanders the possibility of a Fibre-to-the-Home (FTTH) connection by 2019. This nationwide project, however, contrasts sharply with the local and regional initiatives taken in Europe. Most FTTH deployments there are managed at a town or city level, and frequently steered by local stakeholders such as utility companies, municipalities, housing organisations, etc.

Good examples of European cases include Stokab, a publicly-owned company that deployed dark fibre in Stockholm, Sweden, and leases this passive infrastructure to all interested parties (telecom operators, but also large businesses, media companies, health institutions, etc.). Another successful deployment can be found in The Netherlands, where third-party operator Reggefiber (established by the private equity firm Reggeborgh) deploys FTTH in all areas where sufficient demand is guaranteed. Private initiatives are mostly driven by competition between networks, as in Portugal, where the national incumbent is deploying FTTH in order to compete with the TV offering of ZON Multimedia, stimulated by the regulator, which did not impose any unbundling obligation.

Although all initiatives are striving towards the same goals in the end, the paths taken differ strongly and lead to varying degrees of influence on the performance of the different deployments. The question now arises whether a relationship can be found between the characteristics of these different initiatives and their performance in terms of speed of deployment, coverage and uptake.

Evaluating and comparing FTTH deployments in New Zealand and Europe is exactly the topic of the research project performed by two researchers: Fernando Beltran (University of Auckland, Business School) and Marlies Van der Wee (Techno-Economic Research Unit at Ghent University, Belgium). Marlies arrived in Auckland last month, and will work at the University of Auckland until the end of February.

The project looks into the interaction between technology, policy and the market, covering a range of issue areas such as the financing structure and economic investment model, the operational business model including open access or unbundling obligations, the wholesale regulation on both copper and fibre, the impact of retail pricing schemes and the development of competition both at the infrastructure level (inter-platform competition) and the retail level (intra-platform competition).

The goal is to analytically compare these different FTTH deployments while linking the analysis to measurable performance criteria such as speed of deployment, coverage and uptake, which will allow the researchers to assess the deployments on the basis of their efficiency and effectiveness. 

In the end, the project aims to draw conclusions on the most important characteristics driving FTTH deployment and uptake. Comparing New Zealand and Europe also adds value by broadening the view across different political, technological and market-structure settings.

Although this project is initiated by the academic community, the best outcome can only be achieved if the major actors in the field are willing to discuss, contribute and share relevant data. 

With this post, we therefore invite all interested partners (telecom operators, service providers, user organisations, authorities, etc.) to contact us and support this analysis from their own perspective and with the information they can provide.

Contact info:

Fernando Beltrán

DECIDE: University of Auckland Business Decision Making Lab, Co-director

PING Research Group, Director

ISOM Department

The University of Auckland Business School 

New Zealand 

+64 09 923 7850

+64 021 3502282

f.beltran@auckland.ac.nz

 

Marlies Van der Wee

Techno-Economic Research Unit

Internet Based Communication Networks and Services (IBCN)

Department of Information Technology

Ghent University – iMinds

Belgium

+64 021 2930267 

marlies.vanderwee@intec.ugent.be

m.vanderwee@auckland.ac.nz

A detailed description of our project can be found here.

Guest post: Education System part two

In New Zealand education policy, the failure of Talent2 and the Ministry of Education to build and deliver the Novopay payroll system has been widely covered in the media. This failure occurred for a number of technical and contractual reasons, many of which have been laid out in reviews.
The recently released Ministerial inquiry lays the blame for failure in a range of places. But in reading some of the Novopay timeline details, and the technical review, you get the sense that during the process, failure was not tolerated or accepted.

Despite concerns, and reports detailing “147 software defects and 6000 errors”, the contract was still signed off and the project went ahead. The Minister of Education, the Associate Minister of Education and the Minister of Finance all signed off on the project, despite knowing there were defects in the system. Four independent advisors gave the system the go-ahead despite, we can only assume, having a similar level of knowledge about the weaknesses in the system.

On the face of it, it appears the possibility of system failure was almost willingly ignored and was, quite literally, not an option. Why, if everyone appeared to know that the system was faulty, was it allowed to be launched? After the fact, it’s easy to say that remedial work is being done, that Talent2 is learning from each cycle of errors, that more people are on the help desk, and that the Ministry is addressing faults in internal systems and staffing, but it’s difficult to avoid the sense that a fear of non-delivery meant a failed delivery from day one.

Obviously hindsight allows us to critique the Novopay implementation process and the technical aspects. I do wonder who within the Ministry of Education has learnt from the failure, and how. I wonder how schools have learnt from the failure, and whether changes have been made to some of their processes? I wonder if there have been any measurable gains or successes provided by the Novopay system, for schools and the Ministry?

The Network for Learning (N4L), one of this government’s flagship initiatives, is another example where, arguably, the “failure is not an option” culture is in play. Touted as “unleashing learners”, and with a promise not to deliver to any school less than it currently has access to, the Network for Learning has been almost crippled by huge levels of expectation and, in some quarters, entitlement.

After the failings of the Novopay process, the government and the N4L itself are moving quite slowly through the procurement and implementation process. Is this to avoid being seen as a failure, and to avoid failing to deliver? This slowness, and at times lack of communication, has been criticised by some in the education sector – a sector that can be notoriously demanding. Will high and entitled expectations mean the N4L is regarded as a failure when it finally arrives? Or even before it finally arrives?

Will it be a success if the network and services can be built and delivered, but are unaffordable for the schools that need them the most? In the quest to create a “something” that will raise student achievement, will the N4L be defined by test scores and measurements that it has little to no quantifiable way of affecting? Or will the assessments be designed to measure the N4L’s services, rather than the students’ abilities and needs that we actually want to address?

At its heart this Network for Learning is just a collection of wires and boxes that allows schools to connect. What those connections allow is varied and exciting, but the network cannot make change by itself. How schools actively and critically choose to use those connections to meet the needs of their students will make the most difference.

I do hope the premise and promise of the Network for Learning succeeds, but I’d rather call it a ‘Network for Education’. It’s a provision to support the delivery of education in New Zealand. What learning emerges from that network could be amazing, or could be plain and useful, but ultimately it should just be a part of how we in Aotearoa provide for our young people.

Lest anyone think that I’m defending teachers or schools that display signs of failure, I am not. I’m conscious that there are poor practitioners in the education sector, and poor administrators, and that students can be let down by the choices of those who we entrust to guide and support them.

But I don’t believe any teacher or school leader actively sets out to fail their students. By stating “failure is not an option”, the Minister sets up a false assumption that anything less than constant success means our teachers and schools are failing, and thus changes must be made.

So let’s change this conversation in which we are constantly defining success and failure points, and measuring to meet those standards. Let’s have a broader conversation that starts with saying “In education, success and failure are part of the process of learning.” While we don’t aim for failure, we don’t deny that it may occur, and we will constantly look to build on our failures and successes, to refine and make ourselves and our places better.

If, as adults, we’re OK with both success and failure, neither too emotionally distraught nor too enthusiastically hyped, our students may come to see that gratitude in the face of success, and resilience in the face of failure, are what determine our well-being in this life.

And this will be worth celebrating.

Guest Post: Fear of failure and the education system

On Saturday I watched TV3’s The Nation’s piece on the current Minister of Education, Hekia Parata.

I was struck by a phrase from the Minister’s maiden statement to Parliament, in particular her comments directed at education.

We must adopt an uncompromising attitude that failure is not an option. All our other aspirations for economic growth, raised standards of living, and national confidence and pride will flow from getting these basics right.

“Failure is not an option.”

It is, on the face of it, a fine statement that speaks to conviction, emphatic-ness and a desire to accept nothing less than the very best. All laudable sentiments from a politician.

And I don’t deny that this is just one sentence from a wider
speech, but language matters, and I believe a statement like that helps to
frame the culture of practice that a politician leads. Can we actually frame a
society wide conversation about public education with that blunt rejection of
failure? What happens to our systems if we reject failure as an option?

Instead of stating that “Failure is not an option”, and living by that dictum, should we, as @therepaulpowers tweeted, consider that “Failure is quite clearly an opinion.” With that perspective, can we allow
considered and critical opinions to shape our conversation about what failure
actually is and means in practice?

As a culture, we celebrate moments of success, gold medals
and world records. But behind each of those moments are effort, toil and
setbacks. Those setbacks are a series of failures that, when persevered through and built upon, can lead to success. But as a culture, we don’t often reflect on
that effort and that long progression of failure, nor do we celebrate it. 

In sports there are many examples of failure being a
reality. These excellent basketball players have never held aloft an NBA championship, while these footballers never even
made it to the World Cup. 

Would we consider them failures?

The 2011 All Blacks were rightly hailed as successful as they won the Rugby World Cup. That victory salved the memory of 25 years of incessant failure. It’s possibly useful to consider that in that same time period France competed in three finals, while the All Blacks competed in only two. Naturally we see the All Blacks as more successful, because they won the two they were in, but France arguably have a more successful RWC record than the All Blacks.

It’s just that possibly, as a nation they don’t base their
entire cultural worth or success on their national rugby team.

Consider also that in those 25 years, rugby fans were privileged enough to witness the feats of some of the most outstanding players to ever play the game: Christian Cullen, Tana Umaga, Andrew Mehrtens, Jonah Lomu.

Did these players fail? Depending on the criteria, absolutely. 

Were they successful players who achieved highly? Obviously. 

Steve Jobs is lauded as one of the pioneers and visionaries
of personal computing and consumer electronics. But not only was he let go by
the very company that he helped to found, he continued to make
mistakes after his return and
not all of Apple’s products have been successful.

Richard Branson has had over a dozen major ventures that have gone bust under his watch, and yet he is widely hailed as a success and an entrepreneurial leader.

James Dyson’s award-winning bagless vacuum cleaner “took 5,127 prototypes and 15 years to get it right”. Even after the success of that original product in 1993, Dyson has continued to refine and improve his product.

All of their failures were a part of the successes these
three business leaders went on to create. As Dyson discusses in this article
from The Guardian, in business “Failure can be an option“. 

Sometimes though, we rewrite the rules, and despite failure being the absolute state of reality, and being aware of the process by which that point of failure was reached, we choose to define some things as “too big to fail”.

We didn’t allow the banks to fail. The results would have been catastrophic, we were told. But five years on from that financial crisis, are we any better off? Have those institutions learnt from that failure? Did declaring them unable to fail cause them to change their methods? Have our economies become more effective, balanced and useful as a result of not being allowed to fail?

I find it interesting that the World Bank now hosts a Fail Faire, to celebrate “innovation and risk-taking”.

Are we doing that in New Zealand? Politicians often call for innovation and risk-takers, but do we allow for and explicitly let failure happen, so that we can innovate as a result? Do our public sector environments allow for risk-taking and the possibility of both success and failure? Do we have a public sector culture that lets the individuals within it learn from their mistakes?

 

Part Two of Tim’s post goes online on Thursday.

Guest post – UK broadband goes mobile

UK mobile broadband infrastructure

The UK was
among the first countries in Europe to receive a consumer level 3G network,
launched by the mobile wing of British Telecom in 2003. Up until that point we
had, like everywhere else, struggled along with a 2G service – fine for voice
and texts but internet access could be painful even compared to dial-up.

It wasn’t
until 3G went live that mobile broadband became a real possibility, and it’s
proven popular: the UK telecoms regulator Ofcom
estimated in 2012
that 13% of adults in the UK had a mobile broadband connection.

Mobile broadband not-spots

Despite the
enthusiasm with which UK net surfers have taken to mobile internet access,
coverage and performance remain an issue. The UK is not a big country and
we’ve had 3G for ten years so it would be reasonable to expect almost complete
saturation of mobile network signal, but in fact many gaps remain.

In towns and
cities you can generally rely on access to 3G, but it’s still not uncommon to
find ‘not-spots’ where the signal falls back to 2G or drops out entirely. And
once you’re out into the countryside the coverage becomes very patchy.

The network
operators claim to offer in excess of 90% coverage. In 2011 the BBC conducted a crowd-sourced experiment to explore mobile signal
throughout the UK and discovered that when a data connection was available
users were only able to get 3G around 75% of the time. The resulting map
revealed a large number of not-spots throughout the country.

That was two
years ago of course, but our own testing confirms that mobile broadband still
has a long way to go. In May we conducted our annual mobile broadband Road Trip, travelling from London to
Edinburgh while recording the performance of all major network providers.

With speed
tests, downloads, uploads and media streaming we were able to see how mobile
broadband handled practical tasks in a real situation.

Several
networks – notably Three, EE and T-Mobile – completed a large number of tests
and returned some excellent speed test results, some as high as 8Mbps.

However
several networks failed to perform to a reasonable standard, with the weakest
managing to complete just 13% of the tasks. All the networks struggled with
streaming media, particularly video, none of them managing more than half of
the attempts.

This adds up
to a frustratingly inconsistent experience when using mobile internet on the
move. Provided the signal is available the connection can be incredibly fast,
particularly with the latest DC-HSDPA 3G networks providing 20Mbps or more, but
all too often you’ll wander into an area with no connectivity and it will cease
to function.

The next generation

We are
hopeful this situation will improve, however, thanks to the recent introduction
of next-generation 4G mobile broadband.

The first 4G
network was launched by EE in October 2012 and is still fairly limited, but the
spectrum auction which doled out 4G frequency to the remaining providers came
with a requirement: they must commit to providing indoor coverage to 98% of the population.

This should
mean that within the next couple of years we’ll see a significant improvement
in mobile broadband performance as the networks compete to offer the best
mobile internet. Not only will 4G bring much faster speeds but this caveat to
offer a minimum level of service will help reduce not-spots.

And
Britain’s rural areas – currently poorly served by both fixed and mobile
broadband – may see the benefits too as mobile internet fills the gaps left by
our ageing telephone network.

Author Bio: Matt Powell is the editor for the UK broadband comparison site Broadband Genie, where he blogs on the latest broadband and mobile broadband topics.

 

Guest Post: Data vampires

Guest post from John Allen of Rural Connect (originally posted 6 June). 

Broadband retail service providers have a tendency to waggle their finger at the consumer when things go awry. But a review of the telco industry’s proposed Product Disclosure Code suggests it does not go far enough to banish this attitude.

The classic example of this blaming attitude occurred back in 2009/10 around Telecom’s ‘Big Time’ and ‘Go Large’ broadband plans.

Released in July 2009, the Big Time plan offered unlimited speed and, more importantly, uncapped data.

At the time, this was the only plan offering unlimited data, so thousands of customers flocked to sign up to it. Which of course is why it was offered.

The plan was pulled less than 12 months later because of an “extreme minority” that downloaded huge amounts of data.  Telecom managed data traffic by throttling speeds, but some users found a way around this, making the plan “increasingly hard to manage and keep in market.”

Telecom placed the blame for the plan’s demise on its customers. It said that “as the only ISP offering unlimited data, it ended up with all the vampires”, meaning tech-savvy, high-volume users switched to Telecom to take advantage of the uncapped data.

Telecom would have known that an uncapped data plan would result in some users consuming as much data as they could possibly suck down.  The infamous ‘Go Large’ plan from 2006 would have taught them that.

Go Large promised, “unlimited data usage and all the internet you can handle” and “maximum speed internet”.  It did not deliver this and in 2009, the Commerce Commission brought a prosecution that resulted in a $500,000 fine under the Fair Trading Act.

The practice of discussing maximum speeds possible for a broadband technology still occurs but is now not so prevalent.  Witness news reports touting Vodafone’s new 4G mobile broadband service as reaching speeds of up to 100Mbps.

That speed is hypothetical and simply will not be realised in everyday use.

The Telecom example now ensures that hypothetical maximum speeds are not mentioned in contracts or advertising.

Which is in part what the proposed Product Disclosure Code is about – providing telecommunications Retail Service Providers with minimum standards for the disclosure of information about Broadband Plans.

There are six principles to the code.

The first is about making product information clear, readable and easy to find and understand.

The second is that an appropriate level of detailed information is available to consumers at the right point in time.

The third is to use clear, standardised terms and language to allow for easy comparisons.

The fourth is that plan information be kept up to date.

The fifth is providing consumers with accurate and reasonable assessments of how Broadband Plans are priced, how they will perform, and the technology used.

The sixth is transparency around Broadband Plan features and price, including any restrictions.

The first four principles are all fine and proper and will enable Consumers to make easier comparisons between different offers.

The last two are where the issues become grey.

For example, the code does not require telcos to ensure that what news articles say is consistent with what their advertising and contracts detail.

The main issue is that RSPs should give the bottom line of their service’s performance.  Not the top line, or even an average line.  For example, the bottom line on Vodafone’s RBI service is that in times of high usage, speeds may drop to the design limit of 45kbps.

That’s around dial-up speed and had Telecom’s Go Large plan detailed that, the vampires would have been fewer and also blameless.


Guest Post: Data havens and the constitution

Guy Burgess is “a New Zealand lawyer, software developer, consultant, CEO of LawFlow, and feijoa farmer” as well as a chatty fellow on Twitter. He’s written about our data haven concept from the all-important legal perspective.

TUANZ CEO Paul Brislen has written a thought-provoking article on the prospects of turning New Zealand into a data haven. There’s a lot going for the idea, but as Paul notes, there are a couple of stumbling blocks, one of which is the legal situation:

The final problem then, is the legal situation. We would need to become the neutral ground, the data Switzerland if we’re to gain their trust. Publicly adhered to rules regarding data collection and retention. Privacy built in, access only under the strictest conditions.

It would indeed require some law changes to become a “data Switzerland” where, as Paul envisages, “we treat bits as bits and that’s that”, and don’t allow the Armed Offenders Squad to swoop in with helicopters if someone uploads the latest series of Mad Men.

Exactly what those laws would be is a huge kettle of fish: privacy rights, intellectual property rights, safe-harbour provisions, search-and-seizure, criminal and civil procedure, etc. But putting aside the content of those laws (and their desirability), it is worth noting that New Zealand is in a somewhat disadvantageous situation in one respect vis-a-vis most other countries. Whilst New Zealand ranks as one of the most politically stable, corruption-free, and rule-of-law-abiding countries – ideal attributes for a data haven – we are in the very rare category of countries that are both:

  • Unicameral, unlike Australia, the UK, the US, Canada, most of the EU, Japan, India, and others; and
  • More importantly, have no written constitution that entrenches rights, limits Government power, and allows courts to strike down non-compliant laws. Only a handful of countries (notably including the UK) are in this category (and this is putting aside Treaty of Waitangi complications).

By my quick reckoning, the only other country with both of the above attributes is Israel.

What this means for us, as Sir Geoffrey Palmer wrote many years ago, is that whoever is the current Government of the day has unbridled power. Theoretically, there are few if any limits on what can be passed into law – all it takes is a 1-vote majority in the House of Representatives. This includes major constitutional change and retrospective law. For example, in the past decade-and-a-bit we have seen a Government change New Zealand’s highest Court from the Privy Council to a new domestic Supreme Court on a narrow majority, and retrospectively amend the law (also on a slim majority) to keep a Minister in Parliament – both things that may well have faced constitutional challenge in other countries, but here were able to be effected with the same legislative ease as amending the Dog Control Act.

What’s this got to do with becoming a data haven? Well, it means that we cannot give the highest level of assurance that a future Government won’t do certain things that might undermine our data haven credentials.

For example, being a true data haven would presumably mean strong freedom of speech laws. You would want a reasonable assurance that a data centre would not be forced to hand over or delete data due to hate speech laws (present or future), except perhaps in the very strongest cases. New Zealand does have its peculiar Bill of Rights Act covering matters such as free speech, but this does not limit parliamentary power – in fact, Parliament regularly tramples various provisions of the Bill of Rights Act, with the only requirement for doing so being that the Attorney-General must inform the house. Nor does it prevail over inconsistent Acts: if another Act removes or abrogates a right, then the Bill of Rights Act doesn’t change that. So Parliament could potentially pass a law, on the slimmest of margins, that limits freedom of speech. This is not as far-fetched as one might think in an “open and free” democracy: the process is well advanced in the UK, where people face arrest and criminal prosecution for making statements considered by the authorities to be “insulting” (such as calling a police horse “gay”). Could this extend to limiting free speech (or content) hosted in data centres? There is nothing that says it can’t, or won’t.

Compare this with the US, where most of the internet’s infrastructure, governance and data centres are located. The federal Constitution provides the highest protection possible against Government limitation of free speech. Now this obviously does not (and is not intended to) stop situations like a US federal agency shutting down Megaupload and seizing data, in that case partly on the basis of alleged intellectual property infringement. But at least the limits on what the US Government can do are constitutionally defined and prescribed.

This issue is obviously much broader than data centres, but it does highlight the question: is it acceptable, in the information age, for there to be no effective limits on Government power over our information?

Guest Post: UFB for Dummies

Steve Biddle likes to describe himself as a former trolley boy but nobody believes him about that (it’s true, I swear) so we’ll just call him a network engineer with a passion for explaining things simply.

Steve posted this to his blog over on Geekzone but kindly allowed me to republish it here.

Unless you’ve been living on another planet you’ll be aware that New Zealand is currently in the process of deploying a nationwide Fibre To The Home (FTTH) network. This network is being supported by the New Zealand Government to the tune of roughly NZ$1.5 billion over the next 10 years and is being managed by Crown Fibre Holdings (CFH). Work is presently underway deploying fibre nationwide, with several thousand homes now connected to this new network.

Much has been made of UFB retail pricing, and for many individuals and businesses the price they will pay for a UFB fibre connection could be significantly cheaper than existing copper or fibre connections. What does need to be understood, however, are the differences between fibre connection types, and the pricing structures for these different services. There have been a number of public discussions in recent months (including at Nethui in July) where a number of comments made by people show a level of ignorance, at both a business and a technical level, of exactly how fibre services are delivered and dimensioned, and of the actual costs of providing a service.

So why is UFB pricing significantly cheaper than some current fibre pricing? The answer is pretty simple – it’s all about the network architecture, bandwidth requirements and the Committed Information Rate (CIR). CIR is a figure representing the actual guaranteed bandwidth per customer, something we’ll talk a lot about later. First however, we need a quick lesson on network architecture.

Current large scale fibre networks from the likes of Chorus, FX Networks, Citylink and Vector (just to name a few) are typically all Point-to-Point networks. This means the physical fibre connection to the Optical Network Terminal (ONT) on your premises is a dedicated fibre optic cable connected directly back to a single fibre port in an aggregation switch. Point-to-Point architecture is similar to existing copper phone networks throughout the world, where the copper pair running to your house is a dedicated connection between your premises and the local cabinet or exchange, and is used only by you. Because the fibre is only used by a single customer the speed can be guaranteed and will typically be dimensioned for a fixed speed, i.e. if you pay for a 100Mbps connection your connection will be provisioned with a 100Mbps CIR and this speed will be achieved 24/7 over the physical fibre connection (but once it leaves the fibre access network it is of course up to your ISP to guarantee speeds). Speeds of up to 10Gbps can easily be delivered over a Point-to-Point fibre connection.

The core architecture of the UFB project is Gigabit Passive Optical Network (GPON). Rather than a fibre port in the Optical Line Terminal (OLT) being dedicated to a single customer, the single fibre from the port is split using a passive optical splitter so it’s capable of serving multiple customers. GPON architecture typically involves the use of 12-, 24- or 32-way splitters between the OLT and the customer’s ONT on their premises. GPON delivers aggregate bandwidth of 2.488Gbps downstream and 1.244Gbps upstream, shared between all the customers who are connected to it. 24-way splitters will typically be used in New Zealand, meaning that 100Mbps downstream and 50Mbps upstream can be delivered uncontended to each customer. The difference in architecture is immediately clear – rather than the expensive cost of the fibre port having to be recovered from a single customer, as is the case with a Point-to-Point network, the cost is now recovered from multiple customers. The real world result of this is an immediate drop in the wholesale port cost, meaning wholesale access can now be offered at significantly cheaper price points than is possible with a Point-to-Point architecture. GPON’s shared architecture also means that costs can be lowered even further, since a shared network means dedicated bandwidth isn’t required for every customer like it is with a Point-to-Point connection. The 2.488Gbps downstream and 1.244Gbps upstream capacity of the GPON network instantly becomes a shared resource, meaning lower costs, but it can also mean a lower quality connection compared to a Point-to-Point fibre connection.
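To make the splitter arithmetic concrete, here is a minimal Python sketch. The split ratios and GPON line rates are the figures quoted above; the per-customer numbers are simply the line rate divided by the split ratio, ignoring GPON framing and management overhead, so treat the output as an approximation rather than a precise figure.

```python
# GPON line rates shared by everyone behind one splitter (figures from the text above).
DOWNSTREAM_MBPS = 2488  # 2.488 Gbps
UPSTREAM_MBPS = 1244    # 1.244 Gbps

def uncontended_per_customer(split_ratio):
    """Bandwidth per customer if every customer behind the splitter drew traffic at once.

    Ignores GPON framing/management overhead, so real-world figures are a little lower.
    """
    return DOWNSTREAM_MBPS / split_ratio, UPSTREAM_MBPS / split_ratio

for split in (12, 24, 32):
    down, up = uncontended_per_customer(split)
    print(f"1:{split} split -> ~{down:.0f}Mbps down / ~{up:.0f}Mbps up per customer")

# A 1:24 split works out to roughly 104Mbps down and 52Mbps up per customer, which is
# why 100/50Mbps can be delivered uncontended on the 24-way splitters described above.
```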

Now that we’ve covered the basics of architecture we need to learn the basics of bandwidth dimensioning. Above we learnt that a CIR is a guaranteed amount of bandwidth available over a connection. Bandwidth that isn’t guaranteed is known as an Excess Information Rate (EIR). EIR is a term to describe traffic that is best effort, with no real-world guarantee of performance. The 30Mbps, 50Mbps or 100Mbps service bandwidth speeds referred to in UFB residential GPON pricing are all EIR figures, as is the norm with residential grade broadband services virtually everywhere in the world. There is no guarantee that you will receive this EIR speed, or that the speed will not vary depending on the time of the day, or with network congestion caused by other users. With Voice over Internet Protocol (VoIP) replacing analogue phone lines in the fibre world, guaranteed bandwidth also needs to be available to ensure that VoIP services can deliver a quality fixed line replacement. To deliver this, UFB GPON residential plans also include a high priority CIR of between 2.5Mbps and 10Mbps which can be used by tagged traffic. In the real world this means that a residential GPON 100Mbps connection with a 10Mbps CIR would deliver an EIR of 100Mbps, and a guaranteed 10Mbps of bandwidth for the high priority CIR path.

Those of you paying attention will have noticed a new word in the paragraph above – tagged. If you understand very little about computer networking or the internet you probably just assume that the CIR applies to the EIR figure, and that you are guaranteed 10Mbps on your 100Mbps connection. This isn’t quite the case, as maintaining a CIR and delivering a guaranteed service for high priority applications such as voice can only be done by policing traffic classes, either by 802.1p tags or VLANs. The 802.1p standard defines 8 different classes of service ranging from 0 (lowest) to 7 (highest). For traffic to use the CIR rather than EIR bandwidth it needs to be tagged with an 802.1p value within the Ethernet header so the network knows what class the traffic belongs to. Traffic with the correct high priority 802.1p tag will travel along the high priority CIR path, and traffic that either isn’t tagged, or is tagged with a value other than the specified value for the high priority path, will travel along the low priority EIR path. Traffic in excess of the EIR is queued, and traffic tagged with an 802.1p high priority tag that is in excess of the CIR is discarded.
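As a rough illustration of the behaviour just described (not how any particular OLT or switch actually implements it), here is a minimal Python sketch that classifies frames by their 802.1p priority and polices them against a CIR/EIR profile. The `Frame` structure, the choice of priority 5 for the high-priority class and the per-interval budgets are all invented for the example.

```python
from dataclasses import dataclass

HIGH_PRIORITY_PCP = 5  # example 802.1p value assumed for the high-priority (CIR) class

@dataclass
class Frame:
    size_bits: int
    pcp: int  # 802.1p priority tag (0-7), or -1 for untagged traffic

def police(frames, cir_bps, eir_bps, interval_s=1.0):
    """Very simplified per-interval policer: high-priority traffic above the CIR is
    dropped, best-effort traffic above the EIR is queued for a later interval."""
    cir_budget = cir_bps * interval_s
    eir_budget = eir_bps * interval_s
    forwarded, queued, dropped = [], [], []
    for f in frames:
        if f.pcp == HIGH_PRIORITY_PCP:
            if f.size_bits <= cir_budget:   # within the guaranteed rate
                cir_budget -= f.size_bits
                forwarded.append(f)
            else:                           # excess high-priority traffic is discarded
                dropped.append(f)
        else:
            if f.size_bits <= eir_budget:   # best-effort traffic within the EIR
                eir_budget -= f.size_bits
                forwarded.append(f)
            else:                           # excess best-effort traffic is queued
                queued.append(f)
    return forwarded, queued, dropped

# Example: a 10Mbps CIR / 100Mbps EIR residential-style profile over one second.
frames = [Frame(8_000_000, 5), Frame(8_000_000, 5), Frame(90_000_000, 0), Frame(20_000_000, -1)]
fwd, q, drop = police(frames, cir_bps=10_000_000, eir_bps=100_000_000)
print(len(fwd), "forwarded,", len(q), "queued,", len(drop), "dropped")
```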

For those that aren’t technically savvy an analogy (which is similar but not entirely correct in every aspect) is to compare your connection to a motorway. Traffic volumes at different times of the day will result in varying speeds as all traffic on the motorway is best effort, in the same way EIR traffic is best effort. To deliver guaranteed throughput without delays a high priority lane exists on the motorway that delivers guaranteed speed 24/7 to those drivers who have specially marked vehicles that are permitted to use this lane.

There are probably some of you right now who are confused by the requirement for tagged traffic and two different traffic classes. The simple reality is that different Class of Service (CoS) traffic profiles are the best way to deliver a high quality end user experience and to guarantee Quality of Service (QoS) for sensitive traffic such as voice. Packet loss and jitter cause havoc for VoIP traffic, so dimensioning a network to separate high and low priority traffic is quite simply best practice. Performance specifications exist for both traffic classes, with high priority traffic being subject to very low figures for frame delay, frame delay variation and frame loss.

UFB users on business plans also have a number of different plan options that differ quite considerably from residential plans. All plans have the ability to have Priority Code Point (PCP) transparency enabled or disabled. With PCP Transparency disabled, traffic is dimensioned based on the 802.1p tag value in the same way as residential connections are. With PCP Transparency enabled, all traffic, regardless of the 802.1p tag, will be regarded as high priority and your maximum speed will be your CIR rate. As the CIR on business plans can be upgraded right up to 100Mbps, GPON can deliver a service equivalent to the performance of a Point-to-Point fibre connection. Business users also have the option of opting for a CIR on their EIR (confused yet?). This means that a 100Mbps business connection can opt for a service bandwidth of 100Mbps featuring a 2.5Mbps high priority CIR, a 95Mbps low priority EIR, and a 2.5Mbps low priority CIR. This means that at any time 2.5Mbps will be the guaranteed CIR of the combined low priority traffic. The high priority CIR can be upgraded right up to 90Mbps, with such an offering delivering a 90Mbps high priority CIR, 7.5Mbps low priority EIR, and 2.5Mbps low priority CIR.
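A small sketch of the arithmetic behind those two business-plan examples, assuming (as in the examples above) that the low-priority CIR component is fixed at 2.5Mbps and the rest of the service bandwidth is split between the high-priority CIR and the low-priority EIR:

```python
def business_plan_split(service_bandwidth_mbps, high_priority_cir_mbps, low_priority_cir_mbps=2.5):
    """Split a business plan's service bandwidth into the three components described
    above: high-priority CIR, low-priority EIR and low-priority CIR."""
    low_priority_eir = service_bandwidth_mbps - high_priority_cir_mbps - low_priority_cir_mbps
    return {
        "high priority CIR": high_priority_cir_mbps,
        "low priority EIR": low_priority_eir,
        "low priority CIR": low_priority_cir_mbps,
    }

print(business_plan_split(100, 2.5))  # 2.5Mbps CIR + 95Mbps EIR + 2.5Mbps low priority CIR
print(business_plan_split(100, 90))   # 90Mbps CIR + 7.5Mbps EIR + 2.5Mbps low priority CIR
```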

You’re now probably wondering about 802.1p tagging of traffic. For upstream traffic this tagging can be done either by your router, or by any network device or software application that supports this feature. Most VoIP hardware, for example, already comes preconfigured with 802.1p settings, however these will need to be configured with the required 802.1p value for the network. Downstream tagging of traffic introduces a whole new set of challenges – while ISPs can tag their own VoIP traffic, for example, Skype traffic that may have travelled from the other side of the world is highly unlikely to contain an 802.1p tag that will place it in the high priority CIR path, so it will be treated as low priority EIR traffic. ISPs aren’t necessarily going to have the ability to tag traffic as high priority unless it either originates within their network, or steps are taken to identify and tag specific external traffic, meaning that use of the CIR for downstream traffic will be controlled by your ISP.
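As one illustration of how an application can influence upstream tagging, the sketch below sets the Linux socket priority for a UDP socket. This is only a hint to the kernel: whether it ends up as an 802.1p tag of 5 on the wire depends on the host's VLAN interface having an egress QoS map that translates socket priorities into PCP values, which is platform configuration outside this sketch. The address and payload are placeholders.

```python
import socket

# SO_PRIORITY is Linux-specific; 12 is its value in <asm-generic/socket.h>.
SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)

# Hypothetical example: a UDP socket carrying VoIP-style traffic.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Ask the kernel to treat this socket's packets as priority 5. Mapping that priority
# to an 802.1p tag is done by the VLAN interface's egress QoS map (e.g. configured
# with `ip link` on Linux), not by this call on its own.
sock.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, 5)

sock.sendto(b"rtp-ish payload", ("192.0.2.10", 4000))  # 192.0.2.0/24 is a documentation range
```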

It is also worth noting that all of the speeds mentioned in this post refer only to the physical fibre connection. Once traffic leaves the handover point, known as an Ethernet Aggregation Switch (EAS), it’s up to the individual ISP to dimension backhaul and their own upstream bandwidth to support their users.

As part of their agreement with CFH, Chorus dropped their Point-to-Point fibre pricing in existing fibre areas in August 2011 to match UFB Point-to-Point pricing, which means customers currently in non-UFB areas will pay exactly the same price for Point-to-Point fibre access as they will in a UFB area if they choose a Point-to-Point UFB connection. UFB GPON fibre plans won’t be available in existing fibre areas, however, until the GPON network has been deployed, either by Chorus or the LFC responsible for that area. In all UFB areas both GPON and Point-to-Point connections will ultimately be available.

I hope that this explains the architecture of the UFB network, and how connection bandwidth is dimensioned. It’s not necessarily a simple concept to grasp, but with the misinformation that exists I felt it was important to attempt to write something that can hopefully be understood by the average internet user. The varying plan and pricing options mean that end users can choose the most appropriate connection type to suit their needs, whether this be a high quality business plan with a high CIR, or a lower priced residential offering that will still deliver performance vastly superior to the ADSL2+ offerings most users have today.

And last but not least I have one thing to add before one or more troll(s) posts a comment saying fibre is a waste of time and complains about not getting it at their home for another 5 or 6 years. UFB is one of NZ’s largest ever infrastructure projects, and to quote the CFH website:

“The Government’s objective is to accelerate the roll-out of Ultra-Fast Broadband to 75 percent of New Zealanders over ten years, concentrating in the first six years on priority broadband users such as businesses, schools and health services, plus green field developments and certain tranches of residential areas (the UFB Objective).”

Residential is not the initial focus of the UFB rollout, and never has been. Good things take time.

GUEST POST: More on PBX security

Those of you with an attention span will remember we talked about PBX security a little while ago and Ben and I had quite a good discussion both in the comments and on Twitter about how important it all is.

Ben blogged on it and kindly allowed me to cross-post it here. Check out more from Ben at his blog.

A couple of weeks ago Paul Brislen posted a really good post on the TUANZ blog about PABX security. It seems some criminals had used a local company’s phone system to route a huge number of international calls, leaving the company with a colossal ($250k!) phone bill. These attacks are increasingly common, and I have heard a number of similar stories.

Phone systems increasingly rely upon IP connectivity and often interface with other business processes, putting them in the domain of IT. But even if your PABX is from 1987 (mmm beige) and hasn’t been attacked yet, that doesn’t mean it won’t be.

Both Telecom NZ and TelstraClear NZ have some good advice to start with, and you might find your PABX vendor can also give expert advice. Unfortunately many PABX systems are insecure from the factory, and a number of vendors don’t do a great job with security.

In a previous role I ended up managing several PABX systems spread across multiple sites, and learnt a few lessons along the way. Here are a few tips to get you started:

Have a single point of contact for phone issues – make it easier for users to change passwords, and get questions answered.

Educate your voicemail users, and work with them to use better passwords. Avoid common sequences like 0000, 1234 etc.

Document all the things! Make sure any company policies are documented and available (think about mobile phones etc too). Putting basic manuals on your intranet can really help new users.

Even if you outsource management of the phone system, make sure someone in your organization is responsible for it. And make sure this person gets more money!

Create calling restrictions, and put appropriate limits on where each line can call. If a line is only used for calls to local, national and Australian numbers then that is all it should be able to call (don’t forget fax/alarm lines). Whatever you do, make absolutely sure that 111 (emergency services) works from all lines.

Standardise as many things as you can. Look at setting up system-wide call bars. Blocking 0900 numbers is a good start, and if no one will ever call Nigeria, it is a good idea to bar it. Make sure these settings are part of your standard config for new/moved sites.

Work with your vendor to ensure any root/master/service/vendor passwords are complex and unique. I have seen a vendor use the same service password everywhere, until a crafty hacker cracked it and then attacked many systems. Also talk to your vendor about a maintenance contract, and ensure they will install security updates in a timely manner. Restrict any remote service access where possible.

If you use auto attendants or phone menus, make sure they are secured too. Remove any option to dial through to an extension unless you are absolutely sure it is secure.

If you have multiple sites make sure that only appropriate calls can be routed between sites. Some phone hackers have been known to abuse site-site connections to work around restrictions.

If you have lots of sites, you may not always have control over the PABX, so work with your telco and have them restrict international calls as appropriate. Put this in your contract so it happens by default when you add/move sites.

If you have a mix of PABX systems/vendors at different sites, things can get very complicated and expensive, very quickly. Work on reducing the complexity.

Practise good IT security. Most PABXs from the last 10+ years are Windows/Linux boxes (usually unpatched) under the hood, and can be attacked over your network too (or used to attack your internal network!).

Ensure that both billing and system logging are enabled and monitored. Otherwise a problem won’t be spotted until the next phone bill arrives.
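As one illustration of the kind of monitoring this last tip is getting at, here is a minimal Python sketch that scans a call-log export for extensions making an unusual number of international calls. The CSV column names, the international dialling prefix and the threshold are all invented for the example and would need adapting to whatever your PABX or telco actually produces.

```python
import csv
from collections import Counter

# Hypothetical CSV export with columns: timestamp, extension, dialled_number, duration_seconds
LOG_FILE = "call_log.csv"
INTERNATIONAL_PREFIX = "00"   # assumption: international calls are dialled with 00
MAX_CALLS_PER_EXTENSION = 20  # example threshold; tune to your own normal traffic

international_calls = Counter()
with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        if row["dialled_number"].startswith(INTERNATIONAL_PREFIX):
            international_calls[row["extension"]] += 1

for extension, count in international_calls.items():
    if count > MAX_CALLS_PER_EXTENSION:
        print(f"Extension {extension} made {count} international calls - worth checking")
```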

The most important thing to take away is an awareness of the problem. Dealing with PABXs can be complex. Don’t be afraid to get expert help. Your telco and PABX vendor are the best places to start. If you can’t get the support you need, change to one that will. If you have any advice, please add it below.

Guest post: Privacy and the Law

Guest post from Hayden Glass – Principal with the Sapere Research Group, one of Australasia’s largest expert consulting firms. Thanks to Rick Shera (@lawgeeknz) for instructive conversation.

Part 2

In Part 1 [link] we looked at some aspects of online
privacy. In this article we look at the law.

Can the old dog still hunt?

New Zealand’s privacy laws are generally considered to be
pretty sound. The Privacy Act began life in 1993 describing a set of principles
and giving you a bunch of rights in relation to controlling the collection, use
and disclosure of personal information.

 “Personal
information” is defined in the Act as “information about an
identifiable individual”, i.e., information from which you can be
identified. If an agency is collecting anonymous information about your
movements online, that is one thing, but if your online profile grows to the
point that you could be identified from it, the rules in the Privacy Act can
apply. As discussed in part 1, the line between anonymous and identifiable can
be pretty uncertain.

The Law Commission looked at the Act in a three-year review
of privacy laws
that was completed in August 2011. It continues to believe
that self-protection is the best protection, but suggests a substantial set of
changes aimed at improving the law including:

* new powers for the Privacy Commissioner to act against
breaches of the Act without necessarily having received a complaint, and
allowing it to order those holding information to comply with the Act or submit
to an audit of their privacy rules, and

* measures to minimise the risk of misuse of unique
identifiers, and require those holding information to notify you if your
information is lost or hacked, and

* controls on sending information overseas.

The government agrees that it is time for substantial
changes to the Act, although it does not agree with everything the Law
Commission has proposed.
A new draft Bill is expected next year.

To the ends of the earth

One obvious issue in the internet age is the lack of match-up
between the international nature of internet services, and laws that are
limited to the borders of any particular nation. A modestly-sized nation at the
end of the world, like New Zealand, has limited ability to influence foreign
organisations who may not have any local presence, although our Privacy
Commissioner has taken action against reputable major players offering services
in this country.

One answer is to harmonise our laws with other countries, or
rely on the big fish to protect our privacy. If the US or the EU forces firms
to improve privacy protections we will benefit. The US Federal Trade Commission
can legitimately argue that its actions will protect users in other countries
(see the summary of a talk from Nethui 2012 here) and
it is focused on this stuff. Viviane Reding, then the EU Justice Commissioner, said that privacy for
European citizens “should apply independently of the area of the world in
which their data is being processed …. Any company operating in the EU market
or any online product that is targeted at EU consumers must comply with EU
rules”. The French data protection agency is investigating Google’s new privacy policy.

Another evident challenge to existing privacy law is to the
notion of “informed consent”. As a legal principle it is fine, i.e.,
your favourite online service has a privacy policy and you consent either
directly to it by checking the box and clicking “I accept” or implicitly
by using their service. So long as the policy does not breach the law and the
service follows their own policy, they are legally blameless.

In practice you likely haven’t read the policy, and you may
not be in a position to avoid surrendering some privacy in any case.
Participating in society increasingly requires online interaction, and any
online interaction will involve sharing some information. Legally operators can
rely on your click to indicate consent to their privacy policy, but in practice
you cannot really withhold it.

One solution could be crowd-sourced reviews of online privacy policies, or organisations that rate others’ policies.
There are similar troubles with the terms of licensing agreements to which you
have to consent in order to use software.

Fit for purpose

Users have options to protect themselves online if they care
to. They can avoid being tracked, ensure their privacy settings for social
media services are well considered, disable cookies, turn off javascript, use
fake Gmail or Facebook accounts, use incognito modes on their browsers, access
the online world through a VPN or a range of other things. The Privacy Commissioner
has guidance also. And you either
have now or will soon have the option to turn on “do not track” in your browser, which will
impede the ability of firms to piece together your internet history as you find
your own trail through the online garden.

Sadly users mostly do not avail themselves of these options.
That may be because some impede the internet experience a bit. Or because users
do not care to change their behaviour much despite saying they are worried
about online privacy.

In these circumstances, there will continue to be debate
about how far users can or should take responsibility for their own protection,
and how far the law needs to go. This battle is the natural result of the standard
model for internet services, i.e., if you want free internet services, you need
to realise that your eyeballs are the price. No one should be surprised that
advertisers try to make their services more effective by learning more about
the brains behind those eyeballs.

The Sum of All Our Fears – Privacy in the digital age

Our ideas about privacy need redefining in the internet age

Hayden Glass is a Principal with the Sapere Research Group,
one of Australasia’s largest expert consulting firms. Thanks to Rick Shera
(@lawgeeknz) for instructive conversation.

I consider myself a fairly typical internet user. Google for
web search, a Gmail account for email, calendar and contacts, the Chrome
browser for surfing, and my Google drive for a whole host of documents stored
and shared in the cloud. On my Android phone I have 60 or so apps installed. I
have no Facebook account, but I am on Twitter. I use Dropbox to share files,
Flickr for my photos, iTunes for music, and Tumblr and WordPress for blogs.
Plus, like the rest of you, I use online banking, shop online, and get my news
nearly exclusively from online sources. I provide my location to make Google
maps work better and also to help get better search results, but I click
“Deny” when my phone gives me the choice to share location with any
particular website.

I am sharing, therefore, quite a lot of information on the
internet. This is an entirely standard way of life. Around 80% of us use the
internet, and 80% of users report using Facebook.

The internet is such a part of daily life that we now share
information unconsciously. Everything we do online creates a record and we
don’t think too much about what happens to it. In US academic Daniel Solove’s
vivid phrase, “data is the perspiration of the Information Age”. Others, like
American computer security specialist Bruce Schneier, think of your
click-stream as a type of pollution, in the sense
that it is created by doing some useful online task but it can have unpleasant side-effects
that need to be managed.

In Part 1 of this post we take a brief look at the online
privacy environment and what makes it different. In Part 2 we will look at
how laws are changing to adapt to it.

Part 1

Something new under the sun

Problems of information privacy are much more difficult in
the internet age because the internet itself is so widely available, and
information flows on it are difficult to control.

The internet has no borders, and is not based in any
particular country. The location of service providers or users is generally
unimportant: information available in one place is available in all, and it is
difficult to control or trace the flow of data. Content is continually being
added or modified, but content is also persistent, i.e., information that was
once on a website can be searched for and retrieved even after the content of
the site has changed.

The internet is also tricky for governments to control.
There are, of course, still telecommunications operators who connect you to the
internet. They have extensive physical investments,  powerful brands and reputations to uphold. But
service providers who hold information about you are generally not dependent on
individual governments for resources at all. Most of the New Zealand internet’s
most popular services are provided by US firms based in California with servers
all over the world, and with little local presence here. The ability of the New
Zealand government to influence the activities of, say, Facebook is limited,
and given the aterritoriality of the internet, it is often not clear how firms
can navigate the thicket of different national responsibilities.

Privacy, of course, is also a non-internet problem. Those
holding information need to not, for example, lose sensitive government data in
the internal post, or leave
their computer systems open for members of the public to access.

But often internet users do not realise how much they are
sharing (see these unfortunate Belgians), or what the consequences are.
Facebook stands accused of deliberately making it hard for users to control
their own privacy, and even the most sophisticated can get it wrong, releasing data that they think is innocuous (like AOL or Netflix) that turns
out not to be when combined with other public data. See also a local example.

Gold in them thar hills

The major online services companies have also raised
substantial privacy concerns by mis-estimating what their users are happy with:
cue dismay when Mark Zuckerberg, Facebook CEO, said that his firm was built on
privacy expectations that all users might not share and the furore over changes to Facebook’s privacy settings that have led to EU
and FTC regulatory
intervention, or when Google’s then CEO Eric Schmidt said that if you want to keep something private online “maybe you shouldn’t be doing it in the first place”.

With all of this information about your online activities
able to be discovered, there is money to be made in sifting through it, tying it
together, and then selling the profiles to online advertisers.

Consider Rapleaf, a US outfit
that matches email addresses with a range of public data including Zip code,
age, income, property value, marital status and whether the person who controls
this email address has children. It claims to have data on over 80% of US email
addresses, and charges 0.5 cents per match.

Or this (registration required), a deal between Facebook and a firm called Datalogix
that allows the site to track whether ads seen on Facebook lead users to buy
those products in stores. Datalogix buys consumer loyalty data from retailers,
and matches email addresses in its database to email accounts used to set up
Facebook profiles.

Generalised concern

It is hardly surprising that people are concerned about
online privacy. Americans say their biggest perceived privacy threat is social
networking services like Facebook and Twitter (they are also worried about
unmanned drones, electronic banking, GPS/smartphone tracking and roadside
cameras) (WARNING: PDF).

New Zealanders are worried too. A Law Commission survey revealed that 84% of respondents were concerned about “the security of
personal details on the internet”, more than were concerned about
“confidentiality of medical records” (78%) or “government
interception of telephone calls or email” (72%).

Expectations of privacy clearly depend a lot on context. Information
I share with my mother I may not wish to share with my friends (sorry guys),
and information I share with my friends I may wish to keep secret from a
potential employer. Information that I directly and intentionally share (e.g.,
via Twitter) is less sensitive than information that I do not know is being
collected. I would consider my browser history, my email and my search history
more sensitive than my purchase history from Amazon.com. I am pretty relaxed if
information about these things is used just to target online advertising. I am
less relaxed if these data were put together and used to establish my identity
or calculate my credibility and trustworthiness.

And since my list of privacy preferences will not be the
same as yours, it becomes clear that the question of online privacy is about
the limits of my ability to control the flow of information about me, and my
basic point here is that the internet age means that I have less control than
before.

If users are concerned about control but feel
(and to some extent are) powerless, what help does the law provide? We take up
that story in Part 2.