Int. J. Management and Network Economics, Vol. 2, No. 1, 2011
Net neutrality regulation of the internet?
Ulrike Berger-Kögler
Department of Economics,
Nürtingen-Geislingen University,
Neckarsteige 6-10, 72622 Nürtingen, Germany
E-mail: [email protected]
Jörn Kruse*
Department of Economics,
Helmut Schmidt University Hamburg,
Holstenhofweg 85, 22043 Hamburg, Germany
E-mail: [email protected]
*Corresponding author
Abstract: Strict net neutrality means that any data packet of any service should
be treated strictly equal, independent of origin, destination and type of service,
no matter what the economic value of congestion-free conveyance actually is.
There is a broad consensus that any blocking or retardation of data packets for
reasons of censorship or restraining substitutive services should be prohibited.
The problem then focuses on dealing with temporary internet overload. It
will be argued that the negative effect of congestion on quality varies
strongly among services. If user flat rates and net neutrality come
together, some quality-sensitive high value services might be crowded out by
quality-insensitive low value services, which is inefficient. The optimal
solution is the application of priority pricing, where higher prices are paid for
higher priorities in case of overload. It will be concluded that the European
approach, which relies on trusting in market forces combined with the soft
regulation of adequate transparency rules and a sufficient degree of
competition, will lead to an efficient outcome.
Keywords: internet; net neutrality; regulation; priority pricing; quality
sensitivity; quality of service; QoS; overprovisioning; two-sided markets.
Reference to this paper should be made as follows: Berger-Kögler, U. and
Kruse, J. (2011) ‘Net neutrality regulation of the internet?’, Int. J. Management
and Network Economics, Vol. 2, No. 1, pp.3–23.
Biographical notes: Ulrike Berger-Kögler received her degree in Economics
from Cologne University. Since 2009, she has been a Professor of Economics
at Nürtingen-Geislingen University. Her main fields are industrial economics,
regulation and deregulation of network industries. The focus of her recent
publications is on telecommunications, especially wholesale access, regional
markets, international roaming and net neutrality. She has more than 11 years
of practical experience in regulation, having served as the Head of Regulatory Affairs of a
telecommunications company until 2009.
Jörn Kruse received his degree in Economics from the University of
Hamburg. He has been a Professor of Economics at the University of
Hamburg, at the University of Hohenheim, Stuttgart, and (since 1998) at the
Helmut-Schmidt-Universität, Hamburg. His main fields of research include
competition policy, regulation and deregulation of monopolies and especially
network industries, telecommunications, internet, postal services, media and
political economy. He has been a government and industry Consultant in a
number of projects in Germany, Austria, Switzerland and the EU.
1 Introduction
The term ‘net neutrality’ denotes the principle that any data packet of any ‘service’
(which comprises any services, contents and applications conveyed over the internet)
should be treated equally, independent of origin, destination and type of service (TOS)
(e.g., e-mail, web browsing, VoIP/internet telephony, internet television, business
applications, online gaming, file sharing, video streaming, etc.) (Deutscher Bundestag,
2010; Schuett, 2010). It is widely accepted that traffic management to block illegal
content or to filter spam and malicious viruses is consistent with this understanding of net
neutrality (Ofcom, 2010; FCC, 2010). According to net neutrality proponents, equal
treatment of data packets shall also apply when increasing data traffic causes overload,
although the consequences of overload are vastly different among services. An alternative
approach could be to define net neutrality as the equal treatment of every sender paying
the same price per data packet for getting the same quality (priority) of transporting data
packets (Vogelsang, 2010).
In this paper, however, we will use the stricter definition of net neutrality for the
following reasons. First, strict net neutrality implies a zero pricing rule for priority (i.e.,
network operators cannot charge content providers for prioritising their data because all
data must be treated in the same way) (Schuett, 2010), which is very important in the
current debate (Ofcom, 2010). Second, the latter definition is unacceptable to the
proponents of strict net neutrality (Van Schewick, 2010).
The discussions on whether to treat data packets strictly equally and whether market
forces alone will achieve an efficient outcome for internet users have already been taking
place in the USA for several years and have only recently gained prominence in Europe.
The FCC and the European Commission have recently published their approaches on
how to regulate the internet. The discussion and the resulting regulation were fostered by
some network operators who had either blocked content or wanted to charge content
providers for preferred delivery of their data packets (Kruse, 2010). These strategies are
completely different. Blocking and degrading data traffic (i.e., preventing or slowing
down the transmission of a specific legal service) independent of the actual level of
congestion is presumably illegal under competition law if it is an abuse of market power.
Charging for priority, on the other hand, may be an economically efficient way to allocate
scarce resources.
This is our starting point. We want to show how to deal with data congestion in an
efficient way. We bear in mind that internet usage has positive externalities (positive
network effects) for other users and for society, since it improves social and cultural
interaction and democratic discourse (Van Schewick, 2010; Frischmann, 2007). The next
section presents the consequences of internet overload, mainly quality deterioration for a
number of services. It distinguishes services with respect to
quality sensitivity, economic value and data rate. Section 3 discusses overprovisioning as
one of the strategies to avoid congestion. In Section 4, priority pricing and network
management are compared as instruments for rationing scarce capacity. Section 5
analyses the consequences of strict net neutrality and priority pricing (pay for priority) for
consumers, service providers, network operators and welfare. Section 6 critically presents
the main points of the FCC and EU regulatory approaches concerning the internet.
Section 7 concludes.
2 Internet overload, quality of service and crowding out
An important feature of the internet is that all the different services are transformed into
homogeneous data packets before being transported over the IP networks. They are
handled by universal protocols (TCP, IP) and sent over universal network infrastructures,
with routers acting as switching devices. This avoids the need for service-specific
infrastructure investments, enabling the capacities to be used more efficiently.
The main problem of internet policy that is also at the core of the net neutrality debate
is how to deal with internet overload (Kruse, 2008). Overload occurs when the number of
data packets exceeds capacity. Internet capacity is defined as the maximum number of
data packets that can be conveyed in a very short time slot. Overload is caused either by
especially high usage peaks or by the temporary breakdown of transmission lines or
other infrastructure elements such as underwater fibre cables due to natural disasters like
seaquakes or damage by ships, construction vehicles or terrorist attacks. Overload leads
to increased delay, jitter and packet-loss, which may significantly reduce the quality of
certain applications.
In case of overload, data packets are first stored temporarily in a buffer. This
increases latency (data delay) for a number of data packets. This latency encompasses the
time span that a data packet needs to travel from the sender to the recipient. If the data
overload becomes even greater, such that the buffer is no longer sufficient, then data
packets are lost (packet loss). Packet loss is defined as the relative number of data packets
lost during transmission from the sender to the recipient. Since individual data packets
(especially in the case of overload) at times take different paths for the same end-to-end
traffic, packets can potentially arrive at intervals that do not match the intervals at which
they were sent. Such fluctuations in delay (latency fluctuations) are called jitter.
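These three measures can be made concrete with a small numerical sketch (our own illustration with assumed timestamps, not part of the original analysis):

```python
# Toy illustration of the three congestion-related quality measures discussed
# above: latency (one-way delay), packet loss and jitter. All timestamps are
# assumed values in milliseconds, purely for illustration.

sent = {1: 0.0, 2: 20.0, 3: 40.0, 4: 60.0, 5: 80.0}   # packet id -> send time (ms)
received = {1: 35.0, 2: 58.0, 3: 95.0, 5: 121.0}      # packet 4 was lost in transit

# Latency: time span a packet needs to travel from the sender to the recipient.
delays = {pid: received[pid] - sent[pid] for pid in received}

# Packet loss: relative number of packets lost during transmission.
loss_rate = 1 - len(received) / len(sent)

# Jitter: fluctuation of the delay, here measured as the mean absolute
# difference between the delays of consecutively received packets.
ordered = [delays[pid] for pid in sorted(delays)]
jitter = sum(abs(b - a) for a, b in zip(ordered, ordered[1:])) / (len(ordered) - 1)

print(f"delays (ms): {delays}")
print(f"packet loss: {loss_rate:.0%}")
print(f"jitter (ms): {jitter:.1f}")
```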
Despite being homogeneous at the data packet level (during transmission and
switching), internet services are totally different with respect to at least three criteria:
1 quality sensitivity
2 economic value
3 data rate.
2.1 Quality sensitivity
High quality sensitivity means that internet overload has a strong detrimental effect on
the quality of that specific service from the consumer’s viewpoint. In extreme cases,
consumers may not want to use the service at all. The reductions in quality due to
overload (delay, jitter, packet loss) differ greatly according to the service involved.
Quality sensitivity is often equivalent to ‘sensitivity to delays’ and in some cases also
with sensitivity to packet loss or jitter.
The qualities of some services are severely affected. These include interactive
services (e.g., voice over IP, where delays over 150 milliseconds are not considered
tolerable for consumers, and online gaming, where delays of 50 to 100 milliseconds are
already harmful) as well as many business applications and internet television. Other
services will not be affected at all or only moderately. These include services where lost
packets will be re-sent from the source, such as e-mails, web browsing and, especially,
file sharing and other downloads.
2.2 Economic value (of congestion-free transport)
The ‘economic value’ in this paper is defined by users’ willingness to pay for instant
(which means congestion-free) conveyance as compared with a delayed transmission (or
with jitter or packet loss). Thus, the definition directly refers to the value of the
congestion-free transport of the data packets of a service. This has to be strictly
distinguished from the value of the service itself (user value), which is likely also
relevant for the willingness to pay. So the economic value of congestion-free transport,
hereinafter economic value, is a function of the quality sensitivity and the user value. The
economic value (of instant conveyance) is related to each single data packet, which is the
universal quantity unit of capacity usage in the internet. It varies significantly among
services. Many business applications and some interactive services have relatively high
economic value. In general, the data packets of file-sharing platforms, most downloads
and video streaming have low economic value. Consumers can be assumed to have a very
limited willingness to pay for the instant delivery of a music or video download, especially
because it consists of a very large number of data packets, and waiting for several
seconds (or even minutes) would not make any difference to them.
2.3 Data rate
Individual services have very different data rates, which can be measured by the number
of data packets per unit of consumption. A high data rate of a service basically means that
it requires a lot of internet capacity. While some services like e-mail, web browsing, etc.
have comparatively small data rates, other services produce high workloads for the
internet. These include in particular downloads, especially downloads via file-sharing
platforms, as well as video streaming. A large percentage of traffic consists of videos,
music and software.
Some years ago file-sharing platforms (like Napster, Morpheus, E-Donkey, etc.) were
responsible for about 50% of internet traffic in Germany (Schulze and Mochalski, 2009).
Since most of the earlier P2P content is now consumed via video streaming, the
percentage of file sharing alone is significantly lower now. Nonetheless, about half of
internet traffic continues to consist of file sharing or video streaming.
The high proportion of services with high data rates and a low willingness to pay
is not particularly surprising from an economic point of view, since the marginal costs of
using these services are zero because of internet flat rates.
Crowding out. Due to the different services’ characteristics, there is a specific
detrimental pattern which leads to significant inefficiencies. Let us consider two types of
services, S1 and S2, that use the internet traffic capacity as a common resource. Service
S1 is assumed to be a highly quality-sensitive service with high economic value.
Examples include interactive applications, such as VoIP, and a number of business
applications (credit card authorisation). By contrast, S2 is a high data rate, low value
service with low quality-sensitivity. P2P-filesharing and video streaming (YouTube) are
the most relevant examples.
In a situation of rivalry for internet capacity between the two services, the low-value
service S2 can oust the high-value service S1. This clearly inefficient process is termed
the ‘crowding out effect’ (Kruse, 2009). Although this effect has been formulated here in
a somewhat exaggerated manner, it highlights a relevant problem of the internet.
Considering the high volume of download and file-sharing traffic which is not
quality-sensitive, it can be anticipated that high-value services will be significantly
harmed and will potentially be driven out of the market. Additionally, innovative services
requiring high quality standards may not be developed at all, even if they have high
economic value.
3 Overprovisioning
Adopting a long-term view, it would in principle be possible to avoid most of the
overload problems if network operators invested in capacity enlargement. One might
consider very large capacities such that any demand peak could be met immediately at
any time. Sizing capacities to a potential maximum peak load is referred to as
‘overprovisioning’. It requires high reserve capacities and incurs correspondingly high
costs for network operators. This raises the question of whether overprovisioning would
be economically efficient.
Figure 1 Optimum internet capacity and the value of priority pricing (see online version for colours)
The occurrence of overload and its quantitative effects on service quality reduction
certainly depend on the capacity of the internet infrastructure. It is assumed here that
there is no prioritisation. This assumption will be relaxed later. If capacity is varied, we
derive the long-term total utility function LU1(Y) in Figure 1. The utility increases with
growing capacity and reaches its maximum for the first time at YM, where no overload
occurs with the demand, which is assumed to be the maximum relevant demand. If the
capacity is further increased (overprovisioning), non-rivalry in infrastructure use prevails
throughout and LU(Y) remains constant. Differentiation of the utility function LU(Y) to
capacity leads to the long-term marginal utility curve LMU(Y). It therefore shows the
additional utility of an extra capacity unit. This marginal utility is positive (although
decreasing) until capacity YM is reached, where it is equal to zero.
Increasing infrastructure capacity also incurs additional costs. For the sake of
simplicity, the long-term marginal costs LMC(Y) of an additional capacity unit are
assumed to be constant (although this is not essential). Thus, the LMC(Y) curve runs
horizontally throughout.
The point of intersection of the marginal utility curve LMU(Y) and the marginal costs
curve LMC(Y) determines the optimum capacity Yopt. Up to this point the costs of an
additional capacity unit are lower than the additional utility. To the right of this point, the
additional consumption of resources is higher than the additional utility.
Since it can be assumed that the costs of expanding capacity are positive throughout,
optimum capacity for the economy is generally smaller than the capacity that results in
maximum utility for infrastructure users, i.e., complete freedom from overload. Thus, in a
state of optimised welfare, utilisation rivalries and overload externalities still exist at
certain peak times (and possibly in cases of network failures). Since YM > Yopt holds,
overprovisioning is inefficient. This has been developed earlier (Kruse, 2010) in more
detail.
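The argument of the preceding paragraphs can be condensed into a single condition (our own compact restatement in the notation of Figure 1):

```latex
% Compact restatement of the argument above, in the notation of Figure 1:
% capacity is worth expanding as long as the marginal utility of an extra
% unit exceeds its marginal cost, so the optimal capacity satisfies
\[
  LMU(Y_{\mathrm{opt}}) = LMC , \qquad
  LMU(Y) > 0 \;\; \text{for } Y < Y_M , \qquad LMU(Y_M) = 0 .
\]
% Since LMC > 0 throughout, it follows that Y_opt < Y_M, i.e., sizing capacity
% to the maximum peak load (overprovisioning) is not efficient.
```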
The network operators would not have incentives to invest in additional infrastructure
if their outlays could not be amortised. As opposed to most other markets, additional
internet capacity would not lead to additional revenues because of the flat rates.
Furthermore, the heavy users who would use most of the incremental capacity are not the
network operators’ preferred customers under the net neutrality regime.
4 Service priorities
Taking the very different quality sensitivities of different services into account, the
internet congestion problem can be seen as merely a problem of adequate prioritisation of
data packets in times of congestion. It requires priorities such that data packets of
quality-sensitive, high-value services will be conveyed instantly, while data packets of
non-quality-sensitive, low-value services may potentially have to wait and will only be
forwarded with some delay or will have to be re-sent by the service protocol later on.
Internet congestion periods (time slots) are often very short. They may only endure
for seconds, after which router and line capacities may be available again. If data packets
of non-quality-sensitive services are withheld during those short intervals, there may be
no quality reductions at all. If these packets waited until router capacity is available
again, they would not cause any congestion problems for other services, and their specific
short-run marginal cost would be zero. Under these conditions, it would be economically
inefficient to potentially exclude these packets from transport over the internet by a price
per data packet (after a potential departure from flat rates, as will be considered in
Section 5.1). If congestion periods were much longer, problems would worsen, since the
delay of some data packets would not solve the problem without quality deterioration.
Technically, the internet infrastructure (routers) already provides for the introduction
of packet prioritisation. The headers of the data packets will contain specific priority
information – priority flags, or in technical terms TOS bytes – which will be used by the
routers to give differentiated priorities. A priority may be designated to the data packets
of specific services by network management (Section 4.3) or by priority pricing.
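To illustrate where such priority information sits technically, the following sketch marks an application's outgoing packets with a DSCP value via the standard IP_TOS socket option. The class names and DSCP values are assumptions chosen for illustration, and whether and how routers honour the marking is decided in the network, not by the sender.

```python
import socket

# Hypothetical mapping from priority classes to DSCP code points, written into
# the former TOS byte of the IPv4 header (DSCP occupies its upper six bits).
DSCP = {
    "premium": 46,      # EF (expedited forwarding), e.g., VoIP
    "assured": 26,      # AF31, e.g., business applications
    "best_effort": 0,   # default class, priority price of zero
}

def open_marked_socket(priority_class: str) -> socket.socket:
    """Create a UDP socket whose outgoing packets carry the chosen priority flag."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tos_byte = DSCP[priority_class] << 2   # DSCP sits in the upper six bits
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
    return sock

if __name__ == "__main__":
    sock = open_marked_socket("premium")
    sock.sendto(b"voip payload", ("192.0.2.1", 5004))   # documentation address
```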
4.1 Priority pricing
Priority pricing (Telson, 1975; Chao and Wilson, 1987, 1990; Kruse and Berger, 1998)
denotes a pricing scheme in which the right to receive priority service is available to any
customer in exchange for higher prices. This may consist of just two priority classes
where the premium service pays a positive price while the best-effort service has a
priority price of zero. It may alternatively include a higher number of different priority
classes where each has a specific priority ranking. Customers (including private
consumers, commercial consumers, and service providers) are thereby given the ex ante
choice of opting for different levels of delay, jitter and packet loss (and thus different
transport qualities). Priority pricing is basically equivalent to the implementation of a
‘quality of service’ (QoS) regime (Brenner et al., 2007).
A high QoS is therefore synonymous with a high probability that the data packets will
arrive with no or minimum delay, jitter and packet loss. Customers can select among two
or more quality classes which differ with regard to priorities and prices. The willingness
to pay for high priority (high QoS) will depend mainly on the quality-sensitivity and the
user value of the service. This willingness to pay is equivalent to the economic value as
defined above.
Only providers and/or users of quality-sensitive services will have any reason
whatsoever to pay for priority, since non-quality-sensitive services will not gain any
advantage from it. Providers of non-quality-sensitive services (file sharing, e-mailing,
web browsing) will be adequately served with the least preferential best-effort class and
will thus obtain internet service cheaply. Providers of quality-sensitive services will only
be willing to pay for a high priority of their data packets if the users of these services (or
indirectly the advertisers), for their part, are also willing to pay for the resulting quality
differences of these services. This means that, in general, primarily high-value services
with a high quality-sensitivity will opt for a higher priority. This does not foreclose social
and cultural interaction and democratic discourse via the internet, since this is possible in
a non-quality-sensitive way.
When overload occurs under a priority pricing regime, only those data packets for
which this overload causes the lowest disutility will be subject to delay, jitter and/or
packet loss. Thus, we can say that priority pricing results in the economically efficient
rationing of scarce router and line capacity according to the value of the services. Priority
pricing also guarantees that the above mentioned crowding out problem will not occur.
This requires that the capacity is large enough to deliver the data packets of high-value
services with a high quality sensitivity such that no quality deterioration occurs. Roughly
speaking, high prices for priorities and a large number of users paying for priority
indicate that capacity should be enlarged.
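The rationing logic can be pictured as a stylised scheduler (our own sketch, not a description of actual router firmware): in a congested time slot, packets of the paid priority class are forwarded first, while best-effort packets wait in the buffer and are forwarded in a later slot.

```python
from collections import deque

class PriorityScheduler:
    """Stylised two-class router queue: packets of the paid 'premium' class are
    always forwarded before 'best_effort' packets when capacity is scarce."""

    def __init__(self):
        self.queues = {"premium": deque(), "best_effort": deque()}

    def enqueue(self, packet: str, priority_class: str) -> None:
        self.queues[priority_class].append(packet)

    def forward(self, capacity: int) -> list:
        """Forward at most `capacity` packets in one short time slot."""
        sent = []
        for cls in ("premium", "best_effort"):        # strict priority order
            while self.queues[cls] and len(sent) < capacity:
                sent.append(self.queues[cls].popleft())
        return sent

# Overload example: five packets arrive but only three can be conveyed per slot.
sched = PriorityScheduler()
for pkt in ("voip-1", "voip-2"):
    sched.enqueue(pkt, "premium")
for pkt in ("p2p-1", "p2p-2", "p2p-3"):
    sched.enqueue(pkt, "best_effort")

print(sched.forward(capacity=3))   # ['voip-1', 'voip-2', 'p2p-1']
print(sched.forward(capacity=3))   # ['p2p-2', 'p2p-3'] (delayed, not lost)
```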
Let us consider a specific situation of traffic overload. Priority pricing ensures that the
quality-sensitive services will enjoy preferential treatment so that no quality problem will
occur. But most of the non-quality-sensitive services may also not be harmed as long as
capacity problems are brief enough in duration such that delays are acceptable for users
and packet losses can be repaired by a resending mechanism.
From a technical point of view, the specific sender of the individual data packets pays
for the internet transport. Economically speaking, however, the data transmission is
frequently requested by other internet users who benefit from it. This is particularly
apparent in web browsing when an internet surfer sends a few data packets to a server
and requests that the server sends back a large number of data packets because the user
feels it offers a benefit. The service provider generally also derives a benefit which may
come, for example, from click-based advertising revenue, from direct revenue from paid
content pages or from other benefits. Those ‘other benefits’ may include, for example, the
advantage for an academic of presenting his/her findings and papers on his/her
university’s website or the motivation of a group to present its political opinions to the
public.
Who ultimately pays for priority depends on the individual business models of the
providers of internet pages, content, services, applications, etc., which may all be rather
different. It includes the interests and motivation of private and non-commercial
providers of internet services (such as universities, agencies, associations, etc.). Just as an
internet service provider’s individual costs (costs from operating servers, creating and
maintaining content, costs of internet traffic, etc.) have to be covered via motives of this
kind and, where applicable, revenue models, the same will apply to the costs of premium
service when it is preferred. There are numerous conceivable business models (Brenner
et al., 2007).
The economic consequences of implementing a priority pricing regime can be
demonstrated in the capacity framework in Figure 1 in Section 3. Priority pricing changes
the earlier long-run utility function LU1(Y) into LU2(Y). Since priority pricing uses
scarce capacity more efficiently in overload situations, the same capacities now produce
more economic value. Compared with net neutrality, each value (for example, LUmax)
requires less capacity and saves investment capital and operating costs.
In a free market environment where the network operators have incentives to match
the specific needs of different customers, a differentiated QoS system is likely to emerge
with specified quality classes and different prices in order to adequately deal with
heterogeneous services which have different sensitivities with regard to bandwidth, delay,
jitter, and packet loss.
4.2 Evolving price structure
Under a priority pricing regime, network operators will generate more revenues from the
commercial providers and consumers of quality-sensitive services with a high user value.
If a sufficient level of competition between network operators exists, it will prevent
supernormal profits. Thus, because of higher revenues from premium services, user
prices for internet access for consumers with best effort service can be expected to be
lower under a priority pricing regime (QoS regime) than under strict net neutrality. This
will raise the internet penetration rate and create positive external effects for society. The
premium service revenues will also generate incentives for capacity investments for the
network operators.
One may also interpret different prices for internet usage as normal price
discrimination, hereinafter called price differentiation, as it actually appears in many
markets. Price differentiation means that different consumer segments pay different
prices for basically the same or a similar product (for example, different scientific
journals’ prices for libraries, individuals and students) or that relative price differences do
not correspond to the relative cost differences of the products.
In our case, price differentiation can be associated with product differentiation, since
priority service is different from best effort service and relative price differences need not
necessarily reflect relative cost differences. Under a cost allocation method based on the
usage of network elements, there are no cost differences on a euros per Mbit basis
between priority and best effort service, since both use exactly the same network
elements. This merely demonstrates that existing relative price differences (since priority
service is more expensive than best effort) need not reflect cost differences. We should
bear in mind that there are different cost allocation methods, which we cannot discuss in
this paper. Customers with preferences for high quality are willing to pay more to get the
desired high quality, while customers without preferences for high quality get lower
quality for a low price. Price and product differentiation make both groups of users better
off compared with a situation without differentiation (‘one size fits all scenario’).
High-quality users are better off because they get better quality, and low-quality users are
better off because they have to pay less. Furthermore, often there is a third group of users:
low quality users with a willingness to pay less than the price of the one size fits all
scenario. They can be better off if price differentiation lowers the price to a level they
find acceptable. Thus, price differentiation results in direct advantages for consumers. It
also raises suppliers’ total revenue (or is the crucial factor that makes cost coverage at all
possible). Thus, in our case, price differentiation (a network operator offering different
QoS classes and customers selecting their preferred option) leads to higher economic
welfare [Varian, (2004), pp.452–455; Vodafone, (2010), pp.30–33].
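A small constructed example (with freely assumed willingness-to-pay figures and prices, not drawn from the paper's sources) illustrates why both quality-sensitive and quality-insensitive users, as well as the third, price-sensitive group, can gain from such differentiation:

```python
# Toy welfare comparison: one uniform best-effort product versus two quality
# classes (priority and best effort). All willingness-to-pay figures and prices
# are assumed purely for illustration (euros per month).
users = [
    {"name": "quality-sensitive", "wtp": {"priority": 25, "best_effort": 12}},
    {"name": "quality-insensitive", "wtp": {"priority": 13, "best_effort": 12}},
    {"name": "price-sensitive", "wtp": {"priority": 9, "best_effort": 8}},
]

def surplus(prices):
    """Each user buys the class with the highest non-negative net benefit."""
    consumer_surplus, revenue = 0, 0
    for u in users:
        best = max(prices, key=lambda c: u["wtp"][c] - prices[c])
        net = u["wtp"][best] - prices[best]
        if net >= 0:
            consumer_surplus += net
            revenue += prices[best]
    return consumer_surplus, revenue

uniform = {"best_effort": 10}                  # 'one size fits all'
tiered = {"priority": 18, "best_effort": 7}    # differentiated offer

print("uniform: consumer surplus %d, revenue %d" % surplus(uniform))
print("tiered:  consumer surplus %d, revenue %d" % surplus(tiered))
```

In this constructed example the tiered offer raises consumer surplus from 4 to 13 and revenue from 20 to 32, and no group is worse off, mirroring the welfare argument above.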
The conclusion is that without network neutrality regulation, a system of different
qualities with different prices is likely to evolve. In the internet literature, this business
strategy is called ‘access tiering’. With access tiering, the network operator offers
different quality classes with different parameters such as speed, reliability and priority
[Vodafone, (2010), p.40]. With sufficient competition, the best effort service will still be
available at an acceptable level of quality (Section 5.4) and will have a priority price of
zero.
Such a market-driven quality class model will generally result in an economically
efficient rationing of scarce capacity according to the economic value of the
congestion-free services and thus avoid the above-mentioned crowding out problems. If
government should decide in favour of network neutrality regulation, an economically
efficient QoS concept treating all users with the same willingness to pay equally could
not be implemented. Since such network neutrality regulation is economically inefficient,
it should certainly not be implemented.
4.3 Prioritisation by network management
The term network management is basically associated with several technical and
accounting functions to adequately operate and maintain the network (for example,
security, authorisation, etc.). But it may also be used as a method of prioritisation. In case
of congestion, the network operator may then follow a procedure whereby specific data
packets receive priority while others have to wait. If executed in an ideal manner,
quality-sensitive services will get priority so that they satisfy the quality needs of their
users. These include interactive services like VoIP, videotelephony and some other
services where delay and/or packet loss leads to significant quality deteriorations.
Several network operators are already said to manage their networks in order to make
some specific services (especially internet television and VoIP) work properly, which
would otherwise not always be the case (particularly during primetime) (European
Commission, 2011). A network management system would allow for several priority
levels (under the IPv4 protocol, six), but only two would be required if the percentage of
non-priority data packets (best effort) were large enough.
If the only alternative to network management were strict net neutrality (for example,
if priority pricing were declared illegal), network management should be based on
economic reasoning. Network management is principally a rational way to deal with
temporary scarcities of internet capacities, which is not the case under a net neutrality
regime.
There are concerns that the network operators may use their ‘power of prioritisation’
by implementing network management in order to discriminate against specific services,
content or suppliers. Discrimination in this context is an anti-competitive behaviour. It
means that some services, etc. will be treated worse than the actual congestion situation
would require, for example, by degrading services despite a lack of congestion at the
time.
Individual cases of such discrimination in earlier years (Madison River vs. Skype,
Comcast vs. BitTorrent; see Section 6) have fostered the discussion. From an economic
viewpoint, such discriminating behaviour should not be allowed in the future. The best
way to deal with these concerns is transparency. Any network operator should publish its
priority principles and report on its compliance.
A disadvantage of network management is that network operators would decide on
specific services while consumers and service providers would have no individual
influence. There would be no objective criteria (like the price someone is willing to pay)
for the adequate priority in the individual case. Another important disadvantage is that
network management would not create any extra revenues through premium services
which would make best effort services cheaper, raise the internet penetration rate and
create investment incentives.
5 Consequences for network operators, service providers and consumers
5.1 Network operators
Revenues in the broadband market suffer from two main problems. Firstly, under the
assumption of competitive fixed and mobile broadband access markets (see Section 5.4),
revenues will not increase even as data traffic and overload are increasing (see e.g.,
Ofcom, 2010). Competition prohibits network operators from raising the prices of their
flat rates and, due to the nature of a flat rate, revenues do not increase along with data
traffic. Secondly, the best effort flat rates do not fulfil the information function that prices
should deliver in the economy. The operator does not know the valuation of high-quality
traffic and capacity enlargement and does not have enough revenue to finance an
enlargement.
Priority pricing as explained in Section 4 reveals the value of the high-quality data
transport of a specific service. It has the potential to increase revenue in an appropriate
manner that reflects who causes the need for capacity enlargement. According to economic
rationale, the additional revenue will be invested in enlarging capacity until marginal
revenue equals the marginal cost of doing so. In general, priority pricing increases
incentives to invest in capacity.
When thinking about priority pricing, an unregulated network operator will consider
the two-sidedness of the internet in his pricing strategy (Rochet and Tirole, 2006; Roson,
2005; Rysman, 2009). The value of being connected to the internet for the service
provider increases as the number of consumers he can reach increases. Either he is able to
charge consumers for using the content or – more likely – he gets payment from
advertisers, usually on a pay per click basis. The value of being connected to the internet
for the consumers increases as the number of services they can use increases. Each group
in a two-sided market, here the consumers and the service providers, may be charged a
price for having access to and using the platform. According to the economic literature,
optimal pricing of the two groups therefore depends on the willingness of each group to
pay (Eisenmann et al., 2006), on each group’s own price elasticity and on the cross-group
elasticities, which measure the intensity of the indirect (cross-side) network effects
(Evans and Schmalensee, 2007).
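The role of the cross-group effects can be illustrated with a deliberately simple numerical sketch (assumed linear participation functions, not an estimate of real demand): for each pair of access prices, participation on both sides is solved as a fixed point and the revenue-maximising pair is selected.

```python
# Stylised two-sided pricing: consumer participation rises with the number of
# connected service providers and vice versa. All coefficients are assumed.
def participation(p_c, p_s, iterations=200):
    n_c, n_s = 0.0, 0.0
    for _ in range(iterations):                     # fixed point of the two sides
        n_c = max(0.0, 100 - 10 * p_c + 0.5 * n_s)  # consumers
        n_s = max(0.0, 40 - 4 * p_s + 0.2 * n_c)    # service providers
    return n_c, n_s

best = None
for p_c in range(0, 11):        # consumer access price (euros)
    for p_s in range(0, 11):    # service provider access price (euros)
        n_c, n_s = participation(p_c, p_s)
        revenue = p_c * n_c + p_s * n_s
        if best is None or revenue > best[0]:
            best = (revenue, p_c, p_s, n_c, n_s)

print("revenue %.0f at consumer price %d and provider price %d "
      "(%.0f consumers, %.0f providers)" % best)
```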
Let us now combine price and product differentiation with pricing in a two-sided
market. Generally, price differentiation has an output expanding effect; in our case it
increases the number of customers. Thus, price differentiation creates a positive network
effect on the other side of the platform. In most cases, the efficient outcome can be
achieved through price differentiation on both sides of the market, because more users on
one side increase participation on the other side, which leads to a “virtuous feedback loop
of enhanced benefits” (Weyl, 2006). An additional advantage is that charging both sides
of the market renders information on the valuation of high-quality traffic and capacity
enlargement more precise, because both sides’ valuations are revealed.
If strict net neutrality is regulated, priority pricing is prohibited. Network operators
may depart from flat rates and use other pricing strategies. They may apply volume-based
tariffs to avoid congestion by reducing traffic volumes. With this pricing strategy,
consumers are forced to consider their usage, since marginal usage costs (private
marginal costs) are not zero as in the flat rate scenario. The main advantage is the
reduction of ‘overusage’ (usage that is worth less than cost).
Congestion mostly arises during prime time (evening peak periods) when marginal
costs are larger than zero. In off-peak periods, short-run marginal costs are equal to zero.
Following peak load pricing logic, prices should be zero as long as usage remains below capacity.
Prices for usage in peak periods have to reflect the incurred costs. Such a peak load
pricing structure, unlike volume pricing, avoids pricing out low-value usage. Peak load
pricing may lead to more efficient usage of capacity if some usage switches
from peak periods to off-peak periods (Friederiszick et al., 2011).
The unpredictability of internet usage and the heterogeneity at the service level (see
Section 4) jeopardise the advantages of peak load pricing. Unpredictability has the
consequence that overload occurs in periods that are actually peak but were considered
off-peak and priced accordingly. Conversely, low-value services could be excluded
during periods that are actually off-peak if they are considered to be peak and priced
accordingly. Heterogeneity may lead to the crowding out of higher-value services if qualities in
peak load periods are too low (Kruse, 2010).
Additionally, there are marketing arguments against the aforementioned pricing strategies.
Customers prefer flat rates (Skiera and Lambrecht, 2006; Volantis Systems Ltd., 2010). A
network operator who switches to a volume-based peak load pricing structure is likely to
incur competitive disadvantages. If network operators need more revenues for financing
infrastructure investments while maintaining flat rates, they will have to increase the
rates. An obvious disadvantage here is the pricing out of users with a lower willingness to
pay. To dampen this effect, the network operator has the possibility of imposing usage
caps while maintaining the flat rate pricing structure. He may offer different usage caps,
for example, 1 GB per month for a low price, 5 GB for a medium price and 10 GB for a
high price (AT Kearney, 2010). Moreover, he can offer different speeds in Mbit per
second. Such offers will force consumers to consider their usage and lessen the
cross-subsidisation of heavy users by light users. A positive effect is that this pricing
structure reduces overall overload, but it has no impact on the congestion problems
during different periods of a day (Friederiszick et al., 2011). To deal with this problem,
the caps as quantity-based elements in pricing should be implemented only for peak load
periods. Another possibility is to offer flat rates for off-peak periods only and to charge
peak traffic based on volume. But considering the unpredictability of usage and the
efficient allocation of capacity, it is evident that this pricing strategy is inferior to priority
pricing.
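For concreteness, such a cap-based structure could look like the following sketch; the tier sizes follow the example in the text, while the prices and the peak-period overage charge are freely assumed.

```python
# Hypothetical capped flat-rate tariff: a monthly price per usage cap plus a
# volume charge that applies only to traffic exceeding the cap in peak periods.
TIERS = {1: 10.0, 5: 20.0, 10: 30.0}   # cap in GB per month -> price in euros (assumed)
PEAK_OVERAGE_PER_GB = 2.0              # assumed charge for peak traffic above the cap

def monthly_bill(cap_gb: int, peak_gb: float) -> float:
    """Off-peak traffic is unmetered under this tariff, so only peak-period
    volume above the cap is charged on top of the flat rate."""
    overage_gb = max(0.0, peak_gb - cap_gb)
    return TIERS[cap_gb] + overage_gb * PEAK_OVERAGE_PER_GB

print(monthly_bill(cap_gb=5, peak_gb=4.0))   # 20.0 (within cap)
print(monthly_bill(cap_gb=5, peak_gb=8.5))   # 27.0 (3.5 GB overage)
```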
5.2 Service providers
Assume that a strict net neutrality regulation were to be imposed and some
quality-sensitive services no longer worked properly. As the congestion-free conveyance
of these services’ data packets is especially valuable (the willingness to pay for priority
for these data packets is high), the suppliers of these services would look for alternatives
to the universal internet that meet their quality requirements. They could invest in
proprietary networks themselves (for example, Google, Facebook, Amazon and eBay
have autonomous systems; Friederiszick et al., 2011), pay network operators to invest in
proprietary networks outside of the universal internet or – most likely – subscribe to
proprietary capacities (capacity reservation). Network operators will have incentives to
offer high-quality proprietary services, so-called managed services, outside of the best
effort internet to providers of services and content.
Proprietary networks are basically service specific and do not exploit the efficiencies
of the universal internet. It would be almost impossible to use network capacities as
efficiently as in the general internet. For a given volume of data traffic, higher network
investment would be necessary, leaving large capacities unused most of the time. From
an economic viewpoint, proprietary network capacities are inefficient compared to a
properly operated universal internet. Nonetheless, they would be a logical result of net
neutrality regulation.
As a consequence, proprietary network solutions would be more expensive for the
providers of specific services because the necessary capacities for a high quality standard
could not be used for best effort traffic as with the universal internet. Proprietary
networks would also be particularly disadvantageous for the users of the remaining
universal internet, especially for best effort users. If revenues from premium services
were not available, the other best effort services would have to pay higher prices to cover
costs.
Another possibility for large providers of static content is to pay content
delivery network (CDN) providers to deliver their content with a higher quality. CDNs,
offered by Akamai and others, are able to deliver static content faster by distributing
the content to many servers that are located closer to the end customers (URL 4,
http://www.hostway.de; Friederiszick et al., 2011). However, we have to consider that
most CDNs circumvent the effects of congestion in the core network only, and not in
backhaul or in access networks (the latter is relevant for wireless networks). Because of
this and because of their static nature, CDNs are only partial substitutes for priority
service from the network operators and proprietary networks (Ofcom, 2010).
It seems clear that proprietary solutions would create significant advantages for larger
service providers with greater financial power over smaller ones and for incumbent service
providers over newcomers. If best effort does not allow quality-sensitive services to
work properly, small start-up providers with insufficient financial power would not be
able to enter the market for quality-sensitive services in a scenario with strict net
neutrality since CDNs are just partial substitutes for proprietary networks and capacities.
Thus, strict net neutrality hampers the quality-sensitive innovations of start-ups and
forecloses the market for such providers. As a consequence, incumbent service providers
benefit from strict net neutrality because they are better off if competition from new
entrants is harmed. This would explain why many incumbent content providers are
proponents of strict network neutrality (Sidak, 2006). The situation for small start-ups
offering quality-sensitive content would be improved under a regime of priority pricing
because it can be assumed that even start-ups would be able to pay a smaller amount for
the priority delivery of their data compared to the financial resources needed for
proprietary networks. Even a provider offering content that is not quality-sensitive will be
better off with the priority pricing scenario. In the short-run, it does not matter to him
whether he acts under a regime of strict net neutrality or priority pricing because best
effort is sufficient for his content to work properly. In the long-run, he will be better off
with priority pricing because prices for best effort will be lower (see Section 4.2).
5.3 Consumers
Product differentiation allows the seller to offer a greater variety of quality classes of his
product. Heterogeneous consumers benefit from product differentiation because it is
more likely to produce a quality and a cost/performance ratio closely suited to their
individual preferences compared to the ‘one size fits all’ scenario (see Section 4.2).
Concerning the impact of strict net neutrality and ‘pay for priority’ as an instrument of
product and price differentiation on different consumers in the short and long-run, the
same considerations that were valid for the service providers are also valid here.
Quality-sensitive consumers are better off choosing higher quality for a higher price.
Non-quality-sensitive consumers are better off because of lower prices for best effort.
Considering the two-sidedness of the internet, the entire group of consumers may
benefit on average from lower prices. Assuming that service providers generate lower
cross-side network effects compared to consumers, service providers have to pay more
than consumers (Friederiszick et al., 2011; Eisenmann et al., 2006). Assuming that the
proportion of revenue earned from service providers will increase significantly in a
priority pricing scenario so that this side of the market faces higher prices on average, this
would generally lead to lower prices for the consumer side [for an estimate see AT
Kearney (2010)].
It should be considered that the theoretical analysis of two-sided markets is not
conclusive. For example, Economides and Tag (2009) show that under certain
assumptions charging content providers decreases welfare. The lesson to be learnt
depends on how realistic these assumptions are. Their assumptions of inelastic consumer
demand for internet access and uniform termination charges for service providers are
evidently not realistic (Friederiszick et al., 2011). Thus, in our context, we take no further
account of their findings.
Depending on the extent of the price cut for best effort, the third group of
consumers mentioned in Section 4.2 with a willingness to pay less than the price in the
one-size-fits-all scenario is better off if price and product differentiation allow them to
buy the good. As the positive externalities of internet usage – allowing additional
consumers to use the internet creates social value by improving the democratic discourse
when more citizens are able to discuss via the internet and by increasing the value of
social networks when more individuals can participate, thus improving the level of
information in society – are higher than the private value for the consumer (Frischmann,
2007), ‘pay for priority’ is more suitable for internalising positive externalities than strict
net neutrality.
5.4 Further aspects to consider
5.4.1 Sufficient degree of competition as a basis for realising beneficial effects
After having presented the beneficial effects of priority pricing in terms of offering
different service classes to both sides of the market, we do not want to ignore the fact that
some economists have dissenting opinions. According to them, priority pricing reduces
investment in network capacity because the more congested the network is, the higher the
willingness to pay for priority is (Vodafone, 2010; for survey see Economides, 2010).
Some proponents of net neutrality go much further, pointing out the threat of artificially
degrading data traffic to create congestion (La Quadrature du Net, 2009).
This opinion is convincing in a monopolistic scenario. In cases where competition
exists, a network operator doing so will be competed out of the market. Those who pay
for priority will switch to a competing network operator offering a priority product at a
lower price. If those who want best effort for a priority price of zero are faced with
quality deterioration, they will switch to another network operator offering better best
effort quality. For the same reason, anti-competitive blocking and degrading (whether in
a direct manner via network management or in an indirect manner via prohibitive priority
prices) is not likely in a competitive scenario. Network operators would not want to risk
that specific services or content might not be available for consumers, or only available in
insufficient quality (European Commission, 2011). If a network operator blocks or
degrades certain services that are popular with subscribers, the value of network access
for subscribers will decrease. In a competitive scenario, subscribers will switch to another
network operator offering those attractive services that the first one has excluded. It is not
likely that a network operator will have incentives to block or degrade popular services.
The same logic can be applied to arguments concerning vertical foreclosure, although the
incentives for a vertically integrated network operator to block or to degrade rival
services may be stronger than for a network operator who does not offer his own services.
To sum up, the significance of blocking and degrading services and non-investment
in capacity depends on the level of competition in the broadband market. If customers are
not able to remedy inadequate offers by switching to a network operator offering better
value for money, an efficient outcome is endangered. The level of competition is the
crucial factor to be considered while deciding on the future regulation of the internet.
5.4.2 Standardised service classes and priority flags
Priority flags, or in technical terms TOS bytes, specify the quality class chosen by the
sender of the data. In a scenario with different QoS classes and competition between
network operators, entity A that is sending data packets has to make sure that the
‘following’ networks B, C, etc. will also comply with the original quality promise
(priority promise). Entity A will do this by handing over data packets only to those
networks that will guarantee to produce the desired quality.
To achieve this end-to-end quality across different interconnected networks, operators
should agree on the same service classes, e.g., delay: 100 to 200 ms, jitter < 30 ms, loss
< 1% (Brenner et al., 2007). Furthermore, standardised priority flags are required so that
each router in every network understands the information and is able to transfer the data
with the desired quality (Zarnekow et al., 2008; Jay and Plückebaum, 2008). Otherwise,
QoS would only be guaranteed on an intra-network basis.
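Such standardised class definitions could be expressed and checked mechanically, as in the following sketch; the class parameters follow the example above, while the checking logic and the measured figures are our own illustration.

```python
from dataclasses import dataclass

@dataclass
class ServiceClass:
    """Standardised QoS class that every interconnected network agrees on."""
    name: str
    max_delay_ms: float
    max_jitter_ms: float
    max_loss: float          # maximum fraction of packets lost

    def is_met(self, delay_ms: float, jitter_ms: float, loss: float) -> bool:
        return (delay_ms <= self.max_delay_ms
                and jitter_ms <= self.max_jitter_ms
                and loss <= self.max_loss)

# Example class from the text: delay 100 to 200 ms, jitter < 30 ms, loss < 1%.
INTERACTIVE = ServiceClass("interactive", max_delay_ms=200, max_jitter_ms=30, max_loss=0.01)

# Measured end-to-end values after crossing several networks (assumed figures).
print(INTERACTIVE.is_met(delay_ms=150, jitter_ms=12, loss=0.002))   # True
print(INTERACTIVE.is_met(delay_ms=150, jitter_ms=45, loss=0.002))   # False: jitter too high
```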
6 Government regulation of the internet
6.1 FCC
In its recent report and order, the FCC has formulated three main rules to preserve the
internet as an open platform. These are the transparency rule, the no blocking rule and
the non-discrimination rule.
For the sake of completeness, we wish to indicate that these rules partly differentiate
between fixed and mobile broadband access. The discussion on whether the differentiated
rules for fixed and mobile broadband providers are justified from an economic point of
view, depending on different levels of competition, is not part of this paper. The
transparency rule applies to fixed and mobile broadband providers. The no blocking rule
differentiates between fixed and mobile broadband providers. Fixed broadband providers
shall not block lawful content, applications, services or non-harmful devices. Mobile
broadband providers shall not block lawful websites or applications that compete with
their voice or video telephony services. The non-discrimination rule applies only to fixed
broadband providers (FCC, 2010).
As our findings in Section 5 show, the beneficial effects of market forces can only be
reaped in a competitive scenario. For well-functioning competition it is important, firstly,
that customers have the possibility to choose between a variety of competing offers and,
secondly, that they are able to make their choices based on relevant and clear information
(European Commission, 2011).
The transparency rule is important for customers that otherwise might not have
sufficient information about the network operator. It might be quite difficult for a number
of customers to gain enough information about the characteristics of their internet
service. It is likely that transaction costs would be prohibitive. Since fixed and mobile
broadband providers must disclose their network management practices, performance
characteristics and the terms and conditions of their broadband services, the rule is effective in
enhancing competition (FCC, 2010).
The rationale for the no blocking rule is to prevent damages to competition (FCC,
2010) (for example, between VoIP and fixed line telephone service) and harm to users’
participation in social, cultural or democratic discourse (Van Schewick, 2010;
Berger-Kögler and Kind, 2010) caused by the blocking of certain content or services. It is
disputable whether network operators have incentives for blocking (see Section 5.4), but
it is known that some vertically integrated network operators had discriminated against
services which competed with their own services.
In 2005, Madison River Communications (which is a telecommunications
network operator) was blocking ports used for VoIP applications (FCC, 2010; URL 5,
http://www.fcc.gov). T-Mobile has also blocked VoIP services, but this was done
transparently as part of the contract between T-Mobile and individual subscribers. Since
August 2009, a VoIP option can be booked for certain tariffs for an extra price of €9.95
(URL 6, http://www.zdnet.de/news/mobile_wirtschaft; URL 7, http://www.zdnet.de/
news/wirtschaft; URL 8, http://www.t-mobile.de). Comcast has also interrupted and
degraded the P2P traffic of BitTorrent (Ofcom, 2010).
Anti-competitive blocking is presumably illegal if it is an abuse of market power. If
the network operator does not have market power, legal and economic questions remain. It should
be legally and economically clarified whether competition law is strong enough to
prevent blocking or whether it is necessary to impose a specific regulation concerning
blocking.
According to the non-discrimination rule, fixed broadband providers shall not
unreasonably discriminate in transmitting lawful network traffic. To decide whether this
rule is justified, we must first determine what ‘unreasonable discrimination’ means.
According to the FCC, it is reasonable to discriminate between end users who choose
different quality classes, such as assured data rates and reliability as long as the offers are
transparent for all users. The FCC states that reasonable network management is allowed
because it does not constitute unreasonable discrimination. Reasonable network
management in the view of the FCC is application-agnostic discrimination. This means
that in case of congestion, a broadband network operator is allowed to provide more
bandwidth to low volume users than to heavy users. The usage refers to the preceding
period of time.
While the FCC clearly allows ‘access tiering’ as product and price differentiation for
the end customers (FCC, 2010), it does not want to be very precise concerning ‘access
tiering’ for the other side of the market, the service providers. But it seems that ‘access
tiering’ in a so-called ‘pay for priority’ system (which is synonymous with priority
pricing) for service providers is forbidden because it is considered an unreasonable
discrimination by the FCC (2010). To forbid priority pricing at least for one side of the
market contradicts economic rationale. Our analysis has shown that priority pricing has
beneficial effects. It improves welfare as it allocates scarce capacity efficiently and
increases incentives to invest in capacity enlargement.
It is difficult to understand the FCC’s justifications, for example, barriers to market
entry of small service providers, reduced innovation and harm to non-commercial users
using the internet for socially desired objectives. In Section 5, we pointed out that the
reality is likely to be the opposite. It is rather surprising that the FCC wants to allow the
reaping of the benefits of access tiering on the consumer side but wants to forego these
benefits on the service provider side. Not to charge service providers for different quality
classes would only be adequate if consumers placed significantly more value on
additional content than service providers value access to consumers (Ofcom, 2010). Since
this cannot be assumed in general, it is unlikely that the FCC’s regulatory strategy
concerning priority pricing will lead to efficient market outcomes. This is supported by
economic literature (Sidak, 2006).
6.2 European regulation
The EU Commission has a different point of view. It has more trust in competition and
reduces regulatory rules to a minimum. The two main points are:
1 explicit transparency measures should lead to better informed consumers who are
able to make informed choices (European Commission, 2009)
2 governments must have the ability to empower NRAs to set minimum QoS
requirements on public electronic communications network operators.
6.2.1 Transparency rules
Network operators are obliged to inform consumers about the most important features of
their contracts concerning internet access and usage. These include the minimum service
quality levels offered, measures of network management and their impact on service quality
and, if regulated by the national regulatory agency, any change limiting access and/or
usage (European Commission, 2009).
If this information is user-friendly and comparable, the consumer is able to choose
the network operator whose offer best matches his preferences. By being informed about
any change concerning access and usage, the customer may change network operators
if his preferences are no longer matched. But we have to recognise that many contracts
in the telecommunications industry have long contract periods. In Germany, most
network operators offer contract terms of 12 or 24 months (see for example, URL 1,
http://www.dsltarifinfo.de; URL 2, http://www.internettarifvergleich.net).
Thus, directly 'punishing' an operator for quality deterioration may only be possible
via extraordinary contract cancellation rights. Using this instrument may be difficult
for consumers, because it requires proving a deterioration of QoS if the operator has not
provided any information about it.
6.2.2 Minimum standards
As a fallback solution (Berger-Kögler and Kind, 2010) in cases of insufficient
competition, NRAs are entitled to define minimum standards for the QoS. This could be
put into practice if competition between network operators is not strong enough to secure
an acceptable best effort outcome and to prevent a worsening of service, an obstruction or
a slowdown of data traffic (European Commission, 2009). This requires NRAs to be able
to define adequate standards to protect customers against inefficient market outcomes.
Under the EU rules, access tiering is not forbidden for either the consumer side or the
service provider side, the latter being an essential difference from the FCC rules.
Another essential difference is that all kinds of network management are generally
allowed, provided that consumers are informed about them.
To sum up, the EU rules generally provide a reasonable framework for the internet
industry: they allow market forces to work and reap the benefits of product and price
differentiation on both sides of the market. As a fallback solution, they provide for
protecting consumers against an inefficient market outcome. But the devil is in the
details: it remains unclear how best to enable consumers to switch quickly in case of
quality deterioration and how to set an efficient minimum standard should one become
necessary.
6.3 Different regulatory approaches and competition in broadband markets
For a better understanding of the different approaches – trusting in market forces versus
fearing potential threats – we should bear in mind that competition in the US
broadband market is structurally different from competition in European broadband
markets. In the USA, we mainly find facility-based competition between DSL providers
and cable providers (Elliott and Settles, 2010). In Europe, competition is fostered by
effective wholesale access regulation (European Commission, 2010). The authors of the
national broadband plan in the USA characterise their wholesale access regulation as a
hodgepodge which in particular hinders smaller carriers' ability to compete (URL 3,
http://download.broadband.gov, p.47). Although America's broadband plan recommends
that the FCC review its wholesale access regulation to enhance competition, this has not
come into effect yet (URL 3, http://download.broadband.gov, p.58). So there may be a
lack of disciplining competition in those areas of the USA in which a non-competitive
duopoly of a dominant cable and a dominant DSL provider exists. Given that the level of
competition in EU broadband markets is higher than in the USA, the FCC's distrust of
market forces is understandable. But in our view the adequate strategy would be to
improve competition via effective wholesale access regulation rather than to create
inefficient internet regulation.
7 Conclusions
Internet overload is at the core of the net neutrality debate, and adequate prioritisation of
the right data packets is the key to an efficient solution. Our analysis shows that the
optimal outcome will be reached by using market forces to grant priority. In the absence
of net neutrality regulation, network operators have strong incentives to introduce priority
pricing as a means of price and product differentiation for service providers as well as for
private and commercial consumers. This method of access tiering is an efficient way to
allocate scarce capacity according to the economic value of the congestion-free transport
of services. It avoids the inefficient crowding out of quality-sensitive services with high
user value by quality-insensitive services with low user value. Priority pricing can also be
expected to lead to lower best effort prices, which may raise the internet penetration
rate and create positive externalities for society. Priority pricing will reveal the different
services' preferences for high-quality delivery, thereby providing information for efficient
investments in internet infrastructure and increasing the incentives to invest. But all these
beneficial effects can only be reaped if competition at the access and network layers
works properly.
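To illustrate the crowding-out argument with stylised numbers of our own (a hypothetical calculation, not an empirical result), compare the total user value realised when congestion delays all traffic with the value realised under prioritisation, assuming that quality-sensitive services lose most of their value once their packets are delayed.

# Hypothetical values per Mbit/s and the fraction of that value that
# survives a congestion-induced delay.
services = {
    'VoIP':         {'demand': 20, 'value': 5.0, 'survives_delay': 0.1},
    'IPTV':         {'demand': 40, 'value': 3.0, 'survives_delay': 0.2},
    'file_sharing': {'demand': 80, 'value': 0.2, 'survives_delay': 0.9},
}
capacity = 100  # Mbit/s, while total demand is 140 Mbit/s

# Neutral best effort: all traffic suffers the congestion-induced delay,
# so the quality-sensitive value is largely destroyed.
best_effort_value = sum(s['demand'] * s['value'] * s['survives_delay']
                        for s in services.values())

# Prioritisation: VoIP and IPTV fit within capacity and keep their full
# value; only part of the file-sharing traffic is delayed.
priority_value = (20 * 5.0 + 40 * 3.0    # quality-sensitive traffic on time
                  + 40 * 0.2             # file sharing delivered on time
                  + 40 * 0.2 * 0.9)      # file sharing delivered with delay
print(best_effort_value, priority_value)  # roughly 48 versus 235

Even though the numbers are arbitrary, the comparison shows the mechanism: under neutrality the low-value, delay-tolerant traffic occupies capacity that the high-value, quality-sensitive traffic would have used.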
If the government were to regulate strict net neutrality, society would forego the benefits
of the market-driven outcome. Although the FCC's current regulatory approach
prescribes strict net neutrality only for the service providers' side of the market and
allows quality differentiation on the consumers' side, it is still not very likely to result in
an efficient outcome. In contrast, the European Commission focuses on transparency to
let market forces work and considers minimum quality standards a fallback solution in
case of insufficient competition. This can be seen as an adequate way of dealing with the
problems of internet overload.
References
AT Kearney (2010) ‘A viable future model of the internet’, p.30, available at
http://www.atkearney.de/content/veroeffentlichungen/whitepaper_detail.php/id/51295/
practice/telekomm (accessed on 02.03.2011).
Berger-Kögler, U. and Kind, B. (2010) ‘Netzneutralität – eine juristische und ökonomische
analyse’, in Netzwirtschaft & Recht, Beilage 4, pp.1–8.
Brenner, W., Dous, M., Zarnekow, R. and Kruse, J. (2007) ‘Qualität im internet’, Technische und
Wirtschaftliche Entwicklungsperspektiven, Studie, März, Universität St. Gallen.
Chao, H.P. and Wilson, R.B. (1987) ‘Priority-service: pricing, investment, and market
organization’, American Economic Review, Vol. 77, No. 5, pp.899–916.
Chao, H.P. and Wilson, R.B. (1990) ‘Optimal contract period for priority service’, Operations
Research, Vol. 38, pp.598–606.
Deutscher Bundestag (2010) Wissenschaftliche Dienste, Nr. 014/10, 05.03.2010.
Economides, N. (2010) ‘Why imposing new tolls on third-party content and applications threatens
innovation and will not improve broadband providers’ investment’, NET Institute, Working
Paper 10-01.
Economides, N. and Tag, J. (2009) ‘Net neutrality on the internet: a two-sided market analysis’,
available at http://www.stern.nyu.edu/networks/Economides_Tag_Net_Neutrality.pdf
(accessed on 05.03.2011).
Eisenmann, T., Parker, G. and van Alstyne, M.W. (2006) ‘Strategies for two-sided markets’,
Harvard Business Review, pp.92–101.
Elliott, A. and Settles, C. (2010) ‘The state of broadband competition in America 2010’, p.5,
available at http://gigaom.files.wordpress.com/2010/04/pdf-broadband-competition-research
report-4-22-10-final.pdf (accessed on 20.04.2011).
European Commission (2009) ‘Directive 2002/22/EC on universal service and users’ rights relating
to electronic communications networks and services’, amended by Directive 2009/136, OJ
L337/11-36.
European Commission (2010) ‘SEC (2010) 627, Commission Staff Working Document’, Europe’s
Digital Competitiveness Report, Vol. 1.
European Commission (2011) ‘COM (2011) 222 final, Communication from the Commission to
the Council and the European Parliament: The Open Internet and Net Neutrality in Europe’.
Evans, D.S. and Schmalensee, R. (2007) ‘The industrial organization of markets with two-sided
platforms’, Competition Policy International, Spring, Vol. 3, No. 1, pp.151–179.
FCC (2010) Federal Communications Commission, Report and Order in the Matter of Preserving
the Open Internet, Broadband Industry Practices, FCC 10-201, 21 December,
Washington, DC.
Friederiszick, H.W., Katuzny, J., Kohnz, S. and Röller, L-H. (2011) ‘Assessment of a suitable
internet model for the near future’, ESMT White Paper No. WP-11-01.
Frischmann, B.M. (2007) ‘Infrastructure commons in economic perspective’, First Monday,
Vol. 12, No. 6, available at
http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1901/1783 (accessed
on 05.03.2011).
Jay, S. and Plückebaum, T. (2008) ‘Strategien zur realisierung von quality of service in IP-Netzen’,
WIK Diskussionsbeitrag Nr. 315, Bad Honnef.
Kruse, J. (2008) ‘Network neutrality and quality of service’, in Intereconomics, Review of
European Economic Policy, January/February, Vol. 43, No. 1, pp.25–30.
Kruse, J. (2009) ‘Crowding-out bei überlast im internet’, in Kruse, J. and Dewenter, R. (Hrsg.):
Wettbewerbsprobleme im Internet, Schriftenreihe des Hamburger Forum Medienökonomie,
Bd. 9, Baden-Baden (Nomos Verlag), pp.117–140.
Kruse, J. (2010) ‘Priority and internet quality’, in Falch, M. and Markendahl, J. (Eds.): Promoting
New Telecom Infrastructures. Markets, Policies and Pricing, pp.160–174, Edward Elgar,
Cheltenham and Northampton.
Kruse, J. and Berger, U.E. (1998) ‘Priority pricing und zeitkritische rationierung’, in Tietzel, M.
(Ed.): Ökonomische Theorie der Rationierung, pp.203–234, München, Vahlen.
La Quadrature du Net (2009) ‘Protecting net neutrality in Europe’, available at
http://www.laquadrature.net/files/LaQuadratureduNetDOSSIER_Protecting_Net_Neutrality_in_Europe.pdf
(accessed on 05.03.2011).
Ofcom (2010) ‘Traffic management and ‘net neutrality’, A discussion document, available at
http://stakeholders.ofcom.org.uk/binaries/consultations/netneutrality/summary/netneutrality.pdf (accessed on 05.03.2011).
Rochet, J-C. and Tirole, J. (2006) ‘Two-sided markets: a progress report’, RAND Journal of
Economics, Autumn, Vol. 37, No. 3, pp.645–667.
Roson, R. (2005) ‘Two-sided markets: a tentative survey’, Review of Network Economics, June,
Vol. 4, No. 2, pp.142–160.
Rysman, M. (2009) ‘The economics of two-sided markets’, Journal of Economic Perspectives,
Summer, Vol. 23, No. 3, pp.125–143.
Schuett, F. (2010) ‘Network neutrality: a survey of the economic literature’, Review of
Network Economics, Vol. 9, No. 2, Art. 1.
Schulze, H. and Mochalski, K. (2009) ‘Internet study 2008/2009’, Ipoque, available at
http://www.ipoque.com/resources/internet-studies.
Sidak, J.G. (2006) ‘A consumer welfare approach to network neutrality regulation of the internet’,
Journal of Competition Law and Economics, Vol. 2, No. 3, pp.349–474.
Skiera, B. and Lambrecht, A. (2006) ‘Flatrate versus pay-per-use pricing’, in Doeblin, S. and
Hess, T. (Eds.): Turbolenzen in der Telekommunikations- und Medienindustrie, Springer,
Berlin, Heidelberg, New York.
Telson, M.L. (1975) ‘The economics of alternative levels of reliability for electric power generation
systems’, The Bell Journal of Economics, Vol. 6, No. 2, pp.679–694.
URL 1, Available at http://www.dsltarifinfo.de/dsl/dsl-ohne-mindestvertragslaufzeit.html (accessed
on 02.03.2011).
URL 2, Available at http://www.internettarifvergleich.net/dsl-mindestvertragslaufzeiten-imvergleich.html (accessed on 02.03.2011).
URL 3, ‘Connecting America: the national broadband plan’, p.47, available at
http://download.broadband.gov/plan/national-broadband-plan.pdf (accessed on 20.04.2011).
URL 4, Available at http://www.hostway.de/managed-services/cdn/edge-caching/index.php
(accessed on 12.02.2011).
URL 5, Available at http://www.fcc.gov/eb/Orders/2005/DA-05-543A2.html
(accessed on 12.04.2011).
URL 6, Available at
http://www.zdnet.de/news/mobile_wirtschaft_skype_kritisiert_sperrung_seines_voip_clients_
durch_t_mobile_story-39002365-41002615-1.htm (accessed on 20.04.11).
URL 7, Available at
http://www.zdnet.de/news/wirtschaft_telekommunikation_skype_fordert_eu_unterstuetzung_
fuer_netzneutralitaet_story-39001023-41516640-1.htm (accessed on 20.04.11).
URL 8, Available at http://www.t-mobile.de/tarifoptionen/0,20406,17775-_1603,00.html (accessed
on 20.04.2011).
Van Schewick, B. (2010) ‘Network neutrality: what a non-discrimination rule should look like’,
Public Law and Legal Theory Research Paper Series, Research Paper No. 1684677.
Varian, H. (2004) Grundzüge der Mikroökonomie, 6th ed., München.
Vodafone Policy Paper Series (2010) ‘The economics of the internet’, April, No. 11.
Vogelsang, I. (2010) ‘Die debatte um netzneutralität und quality of service’, in Klumpp, D.,
Kubicek, H., Roßnagel, A. and Schulz, W. (Eds.): Netzwelt – Wege, Werte, Wandel,
pp.5–14, Springer, Heidelberg, Dordrecht, London, New York.
Volantis Systems Ltd. (2010) ‘Mobile Internet Attitudes Report 2010’, available at
http://www.volantis.com/files/Mobile_Internet_Usage_2010.pdf (accessed on 05.03.2011).
Weyl, G.L. (2006) ‘The price theory of two-sided markets’, available at
http://economics.uchicago.edu/pdf/Weyl_020607.pdf (accessed on 02.03.2011).
Zarnekow, R., Brenner, W. and Dous, M. (2008) ‘Geschäftsmodelle für die umsetzung von quality
of service im breitband internet’, p.1063, available at
http://ibis.in.tum.de/mkwi08/16_IT_und_die_Medien-_Telco-_und_SoftwareIndustrie/03_Zarnekow.pdf (accessed on 01.03.2011).