
Assuring Dynamic Services in the Hybrid Virtualized Network


— Patrick Kelly, Founder Appledore Research Group

The telecommunication industry is in the midst of a major transformation, both in terms of business models and technology. Virtualization, cloud computing, and 5G will be significant technology transformation drivers creating entirely new markets. Virtualization, and the automation it enables, is rapidly changing how we assure communication services, provide reliability and availability, optimize capacity and utilization, deliver a superior customer experience, and generate new sources of revenue and customer value. Programmable networks abstract network intelligence into the software layer and therefore require a new management and operational paradigm to support advanced services.

RASA: AN INNOVATIVE APPROACH TO ASSURING SERVICES

Appledore Research Group has introduced a functional architecture designed to improve the operating efficiencies and service lifecycle management of cloud-enabled services. Furthermore, we felt it was critical to re-imagine service assurance such that it powered efficient control loops, using existing best practices. We call this architecture Rapid Automated Service Assurance (RASA). It addresses the need to manage both the existing physical infrastructure and the virtualized network functions that will serve as the underlying resources to provide dynamic services in the digital economy.

It is our view that existing OSS systems — which are structured as dual software stacks implementing fulfillment and assurance — must merge to facilitate automated workflow processes. The cloud is on-demand, elastic, ubiquitous, and resilient. Figure 1 is a simplified architecture view that shows the interaction between management and operational functions, and the ability to automate and close the loop.

Figure 1: Simplified Automation Architecture

Source: Appledore Research Group

 

To construct a model of the underlying network and resources, data must be acquired by a discovery process, since the network is in a perpetual state of change. Dynamic changes to the resources and topology are maintained by inventory, which in turn supports other software functions in the assurance and orchestration domains. We like to think of the dynamic inventory and the live topology as a shared resource. Subsets of the dynamic inventory may exist in the RASA or in orchestrators, and this data must be federated to avoid unsynchronized data sets. The role of RASA is to process a “live” stream of user-plane and control-plane performance data, and to compare active streams to a signature pattern for the service. After applying advanced analytics to the measured data, the system isolates the service-impacting event and generates a notification, which can be automated via policy-driven logic. At this point the RASA system provides the understanding that drives corrective action (scaling, healing, moving) by the appropriate orchestration system or layer. Any reconfiguration, instantiation of VNFs or re-routing of network paths is initiated here. The resulting changes then flow back through discovery and inventory into the topology state. This activity occurs in very short cycle times, enabling RASA to analyze and then trigger any future state changes, completing the control loop.
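As a rough illustration of one pass of this closed loop — sense, compare against a service signature, decide an action via policy — consider the sketch below. The class name, thresholds, and policy table are our own illustrative assumptions, not part of any RASA product:

```python
from statistics import mean, stdev

class Signature:
    """Expected behaviour of one service KPI, learned from history."""
    def __init__(self, history):
        self.mu = mean(history)
        self.sigma = stdev(history)

    def deviates(self, value, k=3.0):
        # Flag measurements more than k standard deviations from baseline
        return abs(value - self.mu) > k * self.sigma

# Hypothetical policy-driven logic: KPI -> corrective action
POLICY = {"latency_ms": "scale_out", "packet_loss": "heal"}

def control_loop(kpi, stream, signature):
    """One pass of the loop: compare the live stream to the signature
    and emit (kpi, value, action) for each service-impacting event."""
    actions = []
    for value in stream:
        if signature.deviates(value):
            actions.append((kpi, value, POLICY.get(kpi, "notify")))
    return actions
```

In a real deployment the actions would be handed to an orchestrator, and the resulting topology change would feed back through discovery and inventory; here the loop simply returns its decisions.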

MACHINE LEARNING

As networks become more complex and multi-domain, we believe it will be increasingly difficult to anticipate all possible sources of impairment; consequently, we will need systems capable of learning them. Machine learning has been applied successfully in other industries but has yet to be widely applied to predicting future service impact in the telecommunication domain. Google has used it successfully in its mapping technology to suggest re-routing drivers as traffic patterns change due to accidents and weather. It is being applied to self-driving automobiles, and in the health sciences to provide better health care.

We think machine learning is beneficial in understanding the dynamic nature of virtual networks. Signatures have been used in the past to accelerate problem resolution; a signature-based approach assumes a base of data from which to begin the analysis. Currently we are looking at early attempts to use machine learning in a supervised mode to accelerate the understanding process in a dynamic virtualized infrastructure, and then to utilize this method to drive automation in the orchestration and policy domain layers.
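A minimal sketch of what supervised signature learning could look like: KPI feature vectors labelled with known fault classes train a nearest-centroid classifier, which then labels new measurements with the closest learned signature. The feature names and fault labels below are hypothetical illustrations, not any vendor's implementation:

```python
def train(samples):
    """samples: list of (feature_vector, label).
    Returns one centroid (mean vector) per fault label."""
    sums, counts = {}, {}
    for x, label in samples:
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, x):
    """Label a new measurement with its nearest learned signature."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))
```

In practice the features would be normalised and the model far richer, but the shape of the workflow — label historical impairments, learn their signatures, classify live data — is the supervised mode described above.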

The market for commercial service assurance solutions is maturing from the “customer experience management” era towards the era of “closed loop automation” (figure 2).

Figure 2: Evolution of Service Assurance Market

Source: Appledore Research Group

The closed loop automation era will be driven by the needs of truly dynamic cloud architectures, virtualization and the business requirements necessary to deliver on-demand services, proactive healing and dynamic sharing of costly infrastructure. We advocate that increased automation is the only way forward to support the demands of cloud services. And service assurance must be proactive if CSPs want to improve customer experience. Without automation, many of the economic benefits of virtualization will be foregone, and at the same time, the added complexity may well result in worse customer satisfaction, and a greater operational burden – meaning rising, not falling costs. It is critical then that CSPs design for automation and for scale, and reap both lower costs and greater flexibility.

I will be presenting at TM Live in Nice on May 15 at 4:20PM local time, in the NFV and SDN track, on the topic "Expert Insight: Automated Assurance for Cloud Services". Please join me for a deeper dive into this area and insights on how we see the market evolving.

For more information on this field of research please contact me directly at:
Patrick.kelly@appledorerg.com

The post Assuring Dynamic Services in the Hybrid Virtualized Network appeared first on Appledore Research.


Netcracker 12 and Virtualization – A focus on End-2-End business cases and agile operations


May 2017

Grant Lenahan, Appledore Research Group

If I might summarize the impression from Netcracker’s 2017 analyst day, positioning Netcracker 12, it would be: “let’s quit messing with technology and make end-2-end services profitable, then make the next move”. Fair enough.

Appledore Research Group attended the NEC/Netcracker 2017 Analyst Summit, in Cannes, France ahead of TMF Live!, where Netcracker’s management team introduced Netcracker 12 and outlined its business strategy for managing virtual (and hybrid) networks. Netcracker is addressing virtualization and process modernization by focusing on specific services and their associated business cases, yielding a self-funding bootstrap model as well as well-defined, unambiguous requirements.

This strategy shines a light on Netcracker 12’s strengths and philosophy. It is unabashedly a fully-integrated “full stack”, in Netcracker’s own words. This says something interesting about the industry: the full stack approach is a response to market demand and a desire to reduce deployment risk, yet it also conflicts with many CSPs’ publicly stated desire to have open ecosystems, increase the use of open source, and reduce their dependency on any specific vendor. At the same time, it addresses very real costs (and cost = time) in operations. Let’s look at the inherent strengths, weaknesses, opportunities and threats of Netcracker 12, or of any fully integrated operations suite.

Here is the full report and our SWOT from Cannes, France 2017.

http://appledoreresearch.com/wp-content/uploads/2017/05/NetCracker-12-and-Virtualization-Report-from-Analyst-Days-May-2017.pdf

 

The post Netcracker 12 and Virtualization – A focus on End-2-End business cases and agile operations appeared first on Appledore Research.

NOKIA – Revolutionizing Evolution? Maybe, just maybe.


Does a huge NEP “get” cloud?  Appledore Research Group attended NOKIA’s full-day analyst event, scheduled coincident with TMForum Live! in Nice, France (May 15, 2017).  Most of these events showcase incremental improvements in OSS, BSS and related software, within the traditional bounds of telecom.

This was different; especially if one listened closely.  NOKIA went out of its way to show that it is an innovator – a telecom giant that “gets” the transition to cloud, and the implicit importance of software.  Are they there yet?  Of course not.  Are they making the right moves? I think so.

You can read more details about the event here:

NOKIA Analyst Day 2017 Full Report for Download

Before you jump to that document, let’s look at the key messages. First, Bhaskar Gorti, head of “Applications and Analytics”, which includes all of NOKIA’s OSS-BSS-MANO portfolio, stated that his goal is to make NOKIA a true software company, not a hardware company with supporting software. This is a key pivot if one is to excel in a virtualized world.

Next, NOKIA outlined both the technology and the operational principles behind NOKIA’s cloud core, and it is directionally impressive, with principles that support “cloud native” and aspirations to transform ops with true automation and openness.

Finally, much of the remainder of the day concentrated on the application of machine learning and other advanced algorithms (with Bell Labs referenced repeatedly) to basic operational processes such as assurance, analytics, and customer support.

As they say, both God and Devil reside in the details.  But the day was both informative and encouraging.

Grant Lenahan
Partner and Principal Analyst
Appledore Research Group, LLC

The post NOKIA – Revolutionizing Evolution? Maybe, just maybe. appeared first on Appledore Research.

Nice reflections


This year was my first TMFLive since it was hosted in Dublin. In my absence the show seemed to have become smaller and less crowded. Surprisingly, at the same time, the show seemed to have become much more relevant to the needs of the telco. Whilst the show was billed as “Driving the Roadmap to Digital Success”, in reality the digital services hype and the desire to compete with the likes of Google were long gone. In their stead there was a strong focus on the real challenges of running operationally efficient network businesses, and on how telcos can collaborate and partner with others.

 “Open source isn’t like free candy – it’s like a free puppy!”
– Shahar Steiff PCCW

Open source, and in particular the ONAP project, was high on everyone’s agenda at the show. However, the discussion has moved on from open source as a possible ‘free’ magic bullet to how it can form a foundation for agile network businesses and promote iterative and rapid development of alternative standards within a community.

Similarly, the real-world complexity involved in the virtualisation of networks was being actively discussed; particularly how realising the benefits of virtualisation requires more than simply replicating physical network functions on more complex virtual infrastructure.

Finally, the idea of automation and AI in the network seemed to have moved from the periphery of radio-parameter SON to becoming a critical mainstream component of the network.

My sense was that the industry is now looking at addressing the right problems at the heart of its business. This reality was nicely summarised in the post-show article in TMF Insight.

Image: freeimages.com/renderman jim

The post Nice reflections appeared first on Appledore Research.

NOKIA FP4 Launch: Packet Processing can be about more than “speeds and feeds”

— Grant Lenahan, Appledore Research Group
Diagram courtesy NOKIA

There’s a saying:

“You can forward packets, count them or classify them. Pick one.”

For as long as I can remember, there have been trade-offs between telemetry and router performance, and between DPI and router performance. The problem has often been “solved”, if we can call it that, by having tandem systems – one routing, and one performing DPI or other tasks on a subset of the stream.

On Wednesday, NOKIA announced its new FP4 packet processor, which claims not only to increase speeds and density by more than twofold, but to do so while delivering DPI, DDOS protection, and rich telemetry, all at once.

So why was ARG, the self-proclaimed cloud management guys, covering a hardware launch and why am I blogging on hardware? Simple – chips such as FP4 will pave the way for more advanced management tasks. We can collect more telemetry data. We can collect that data more often. We can push complex masks to the chip and ask it to pattern match everything. We can do what has heretofore been possible, but uneconomic. We also presume that even if NOKIA is ahead of the industry (as they claim) that others will follow, and we may be seeing the beginning of a new age in analytics and more importantly, in actions.

NOKIA understands that. While about half the day was spent showing and telling the “sexy hardware” (I am not making this up), roughly the other half was spent on what kinds of tasks it could enable. NOKIA showed examples and diagrams that will warm the hearts of any ARG reader – with nested, closed, automated loops operating between the various service routers, Deepfield analytics, and NOKIA’s NSP SDN solution. Many beneficial use cases were discussed, including:

  • more capable and complete DDOS blocking, performed earlier – before it gums up routing
  • more insightful global optimization of paths and IP/MPLS traffic, using rich telemetry, Deepfield analytics, and NSP
  • optimization based even on external or ingress congestion, e.g. better peering point selection and allocation (using the usual suspects)
Diagram courtesy NOKIA

The message is powerful but subtle. From the management perspective, nothing shown was previously impossible – but it was impractical. High telemetry rates and widespread DPI just cost too much, either in money, lost throughput, or both.

What I found most interesting is how much NOKIA found it possible, and desirable to take what was a chip and router launch, and raise the level of discussion to topics of automation. This illustrates what has always been true, but is becoming more evident – the network and its management are two halves of a single whole. The network defines what is possible, the management (and control plane, which are merging) takes advantage of those possibilities. Abstract, expose, consume.

You can find additional summaries of FP4, the new router line, and Deepfield on appledoreresearch.com/research:

http://appledoreresearch.com/product-type/research-notes/

This is certainly the first in a series of similar announcements from leading players in the industry, and promises a future with fewer cost and practical limitations on the data we can collect from the network, and the complex filtering and forwarding rules we can throw its way.

-Grant

 

 

The post NOKIA FP4 Launch: Packet Processing can be about more than “speeds and feeds” appeared first on Appledore Research.

CENX selects a new CEO; reinforces focus on active assurance


CENX, an assurance supplier based in Ottawa, Canada, has announced the appointment of Ed Kennedy as its new CEO. Mr. Kennedy, until recently President of Tollgrade, comes to CENX with a background in carrier service assurance, cementing CENX’s recent focus on the transformation of assurance, especially within virtualized and distributed networks.

As readers of Appledore’s research likely know, CENX has a novel approach to service assurance, with a focus on (as we see it) two critical transformations: 1) automation of the assurance process and 2) simplification and correlation of **service** assurance in a multi-domain, multi-technology, multi-layer environment. Such environments will more and more be the norm, driven by virtualization (NFV, SDN, XaaS) and digital collaborations (with services made up of components from many enterprises, including carriers).

As a small, innovative supplier, CENX has a common problem. Being small, it has a limited voice and limited outbound resources. Being innovative, it needs to change long-standing perceptions and the status quo. The two don’t mix well.

Ed is charged with charting a path to growth. CENX has recently announced a high-profile carrier customer in Verizon, where, according to both CENX and Verizon, it is effecting a unified view across domains and supporting the closing of the automated assurance loop in some of those domains. According to CENX, Ed plans to grow the company by a) focusing on those complex challenges for which CENX is most unique, and b) growing channels and partners, increasing its effective voice and feet on the street.

Appledore Research Group will be watching CENX’s progress closely, especially as they work to effect true insights and automation in the assurance of networks and services.

For more on CENX and their approach see our white paper, “Assuring Dynamic Services In the Hybrid Virtualized Network”, here:

Library of White Papers

The post CENX selects a new CEO; reinforces focus on active assurance appeared first on Appledore Research.

Mobile World Congress Americas: Back in Business and Down to Business


Appledore Research Group attended the inaugural Mobile World Congress Americas (MWC-A), held this year at San Francisco’s Moscone Center. The main event, MWC Barcelona, is well established as the industry’s premier venue for product announcements, but more importantly for critical networking and meetings amongst engineers and executives from across the globe. By contrast, CTIA has become a regional show with a focus on phones, gadgets, trucks and tools. Not much of interest for an analyst like me who covers the software and infrastructure necessary to transform our industry to highly automated virtualized networks.

MWC-A is looking up. The GSM Association reported 21,000 attendees. That feels high, but the show does have a better and deeper feel. Exhibitors mostly filled both the north and south halls, but more importantly the mix of skills, topics and technologies was much better. This year there were executives and experts willing and able to talk about 5G, small-cell deployments, NFV, SDN, operations, and the management software behind the agility of tomorrow’s networks. Nearly everyone I spoke to indicated that the quality of attendees, in terms of knowledge and seniority, was good.

So CTIA/MWC-A is back in business. But I also said it’s getting down to business. By this I refer to the tenor of the presentations, roundtables and private meetings that I and my colleagues experienced. There was less talk of new technologies for their own sake and much more discussion of how they were being managed, implemented and used to improve business.

Let’s discuss ONAP and associated open source projects. In Appledore’s area of focus, ONAP is the “elephant on the table”, but in our opinion, industry collaboration on characterization and modeling is a closely related beast. From discussions, some confidential and others public, we are seeing more participation and potential adoption of ONAP. Vodafone came out very publicly, but we also heard of others working more with ONAP and its associated SI vendors.

Similarly, we had discussions with diverse vendors about how to build better, more granular, more de-composed, and richer models of VNFs and services. This is the critical task toward more flexible orchestration, making possible higher levels of automation. It’s far too complex to go into now, but the fact that we spoke to everyone from software vendors to CSPs to startups to industry support organizations about related aspects of this common issue says the industry is beginning to address the complexity inherent in highly autonomous operations.

We’ll be blogging on several related topics in the upcoming weeks and months, and releasing several framework reports digging deep into this area – the first of that series available now, here:

2017 Market Summary: Critical Advances in Orchestration for Agility and Automation: Part 1 of 2

All the best, and maybe we’ll see you at The Hague for SDN World Congress.

Grant Lenahan

Partner and Principal Analyst

The post Mobile World Congress Americas: Back in Business and Down to Business appeared first on Appledore Research.

Two-factor Authentication —“tip of the digital services iceberg” for CSPs?


Today there was a press release announcing that the “big four” US mobile operators are collaborating to provide a universal two-factor authentication system. We applaud their effort. The real news here, though, is not technical, nor even limited to authentication – it goes to the heart of business opportunity and growth for CSPs in the digital collaboration (services) era. If they get it right, this is a terrific opportunity for the major mobile operators on several fronts:

  • It has major implications in terms of their customers’ security and user experience
  • If CSPs succeed, it will greatly enhance their reputations as trusted suppliers of secure services, and of security itself – a major opportunity in enterprise
  • Two-factor authentication plays to network operators’ “core competencies” including the existing hardware and software authentication that exists on SIM cards, and the complex tuples behind them
  • It may provide a stepping stone to the “value added” APIs (digital services) that CSPs so desire to provide, but have had only modest success in

In short, it can launch them into the digital service business they crave, and move them beyond “simple pipes”. And it’s a realistic objective that builds on acknowledged strengths.

There is no question that today’s authentication environment is broken. The latest, and worst, breach is Equifax, yet this is but a symptom of the real problem: passwords that are too complex (and too many) to be managed, and at the same time too simple to be secure. NIST recently recanted its long-standing recommendations – that all passwords have a mix of capitals, small letters, numbers and special characters, and be changed periodically. Why? They were hard for humans to remember but easy for computers to crack – so everyone had a “system” that made them even easier on the machine-enabled crooks.

Two-factor authentication, based on mobile devices, solves all of these problems. It avoids human memory limitations. It creates a transparent user experience. It raises the complexity of attack. And it even provides a central location to data-mine for suspicious patterns.
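For the technically curious, one standard building block behind mobile one-time codes is TOTP (RFC 6238, built on the HOTP algorithm of RFC 4226), which any carrier scheme could use or adapt; we are not claiming it is what the operators announced. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP where the counter is the current 30-second time window."""
    t = int((time.time() if at is None else at) // step)
    return hotp(secret, t, digits)
```

With the RFC 4226 test secret `b"12345678901234567890"`, counter 1 yields the published vector "287082"; the time-based variant at t=59s produces the same code, since 59 falls in window 1.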

CSPs are inherently in a great position to handle this task. They have everywhere (almost) connectivity, and least-common denominator methods (SMS). They have strong authentication, a clear identity of users, location data, and a strong culture of process orientation and security.

Too often I see CSPs looking to get beyond their traditional strengths. I have always argued that they simply need to understand what their strengths are, and then exploit them in innovative ways. This is such a case.

In closing I want to emphasize both the strategic fit, and the opportunity behind this relatively innocuous announcement. Technically and operationally, authentication is a uniquely good fit with CSPs – better than with the Webscale providers or banks – the logical competitors. In terms of future opportunity it changes the perception of CSPs from purveyors of pipes, to partners that can exploit a ubiquitous, sophisticated, costly and capable infrastructure. It can lead them into other relationships, as providers of charging, settlement, security, managed secure services and myriad other value added services, offered via digital ecosystems.

I urge the carriers to execute on this strategically, and potential vendors and partners to consider how to improve and leverage this opportunity.

 

Grant Lenahan

Partner and Principal Analyst, Appledore Research Group, LLC

The post Two-factor Authentication — “tip of the digital services iceberg” for CSPs? appeared first on Appledore Research.


SDN World Congress: Making Automation Real


Automation was a key point of discussion at the Layer 123 event in The Hague last week.

 

In some ways it reminds me of other terms the industry has used that mean different things to different people. Remember “customer experience management” and “analytics”? What are we talking about, specifically, and where do we accrue the business benefits? One way to bring clarity to the topic of automation is to look at a framework for how it might work.

Closed-loop control systems have been applied in many industries. A closed loop consists of a controller and a sensor. In essence, we measure inputs (network events), which act as the sensor readings, and compare these events to the desired state of the service. Automation can’t exist without understanding and applying actions. That understanding is another area the industry community is seeking to improve and advance.

This (generating true understanding) is where Appledore Research Group expects to see value creation. We now have the compute power to process large streams of data. The innovation will come from robust machine learning models: the ability to understand the state of the service and “tune” the feedback loop via the controller (orchestrator) in fast cycle times, where human intervention either falls short or is unable to scale to the requirements of robust cloud services.
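To make the controller half of the loop concrete, here is a toy proportional controller that “tunes” capacity toward a target utilisation. The gain value, action names, and utilisation metric are illustrative assumptions, not a reference design:

```python
def controller_step(desired_util, measured_util, instances,
                    gain=2.0, min_inst=1):
    """Compare measured state to the desired set point and emit
    a scaling action for the orchestrator to execute."""
    error = measured_util - desired_util           # sensor vs. set point
    delta = round(gain * error * instances)        # proportional response
    target = max(min_inst, instances + delta)
    if target > instances:
        action = "scale_out"
    elif target < instances:
        action = "scale_in"
    else:
        action = "hold"
    return action, target
```

Each fast cycle re-measures utilisation after the orchestrator acts, closing the loop; a production controller would add integral/derivative terms, damping, and hysteresis to avoid oscillation.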

Currently, we are looking for evidence that CSPs and suppliers are moving forward in a step function. I think it will materialize in nested loops at the domain level and for specific services. We already have evidence that SON and SDN networks have led the way. Where might we expect to see it in the NFV domain? More importantly, how will the industry apply it across technology domains and geographical boundaries?

Patrick Kelly, Founder and Principal Analyst

Our published framework on automation and control theory is here:

Closed Loop Automation in Virtualized Networks

and our “sister” publication on how Service Assurance fits in is here:

Rapid Automated Service Assurance in the NFV and SDN Network

 

The post SDN World Congress: Making Automation Real appeared first on Appledore Research.

SDN World Congress: Culture and trust in The Hague


Appledore Research Group is just back from an excellent week at SDN NFV World Congress in The Hague (my first since the early years of this event in Bad Homburg). Whilst I have been away, the conversation has changed. Gone is the early-stage optimism of NFV/SDN changing everything; in its place there is a much more sober focus on key NFV use cases and the challenges of operationalising the technology.

These challenges were well described, in the session I chaired, by Alexey Gumirov of Detecon. Alexey nicely summed up the two key operational challenges to NFV/SDN in CSPs: culture and trust.

“Don’t care culture breaks everything”

Existing cultures in CSPs are siloed and create responsibility avoidance. If we don’t change the culture and the silos, we are destined to replicate our existing physical-network ways of working in the new systems (Conway’s Law).

“When I use AWS I trust it”

Many of the challenges in software become easier when we trust and work with our suppliers. Alexey noted we seem implicitly to trust our relationship with AWS or with Kubernetes, and by working with them the capability improves and is easy to consume. Yet the majority of existing CSP interactions with suppliers remain about shifting responsibility to the vendor, creating the exact reverse of this.

Appledore will be looking at the opportunities and challenges of operationalising NFV/SDN in upcoming research.

The post SDN World Congress: Culture and trust in The Hague appeared first on Appledore Research.

Nokia Global Analyst Forum 2017: a focus on enabling transformation and automation


Appledore Research Group attended NOKIA’s annual Global Analyst Forum, held at the firm’s experience center in Espoo, Finland. The event covered nearly two full days and spanned the (extensive) scope of Nokia’s portfolio, from software to consumer products (via licensing and partners).

Appledore Research Group is primarily focused on the software, processes and business transformations essential to wring the most business benefit from virtualization (or a likely hybrid future); so I’ll concentrate on the topics closest to that theme.

My first observation is that Nokia has, to a significant degree, successfully integrated its pre-existing products and resources with those of Alcatel-Lucent. Today, not that long after the merger, there is little evidence of the two pre-existing firms, and all signs – both overt and subtle – indicate that a truly cooperative combination has occurred, and is occurring. It is worth noting that many of the Alcatel-Lucent leaders, ways of working and strategic directions remain at the forefront – embraced as strengths rather than being rejected as “foreign bodies”. This bodes well for the combined organization, which clearly values and draws on its rich diversity of strengths.

Marcus Weldon, President of Bell Labs, set the tone with a set of fundamental technology and economic trends which, he believes (and we agree), will set the rules for the industry and define possible opportunities. Besides that, listening to Marcus is always mind-expanding 🙂

Above:  Weldon: if we are not careful, flexibility drives costs up; automation brings them down.

Building on this theme, Bhaskar Gorti, president of “Applications and Analytics” (Nokia’s name for its consolidated OSS/BSS/Analytics/MANO portfolio), gave what I felt was a compelling presentation on his a) strategy and b) execution for that business group. Bhaskar’s essential strategy is to build a true software business, independent of hardware, at scale. To accomplish this he is investing in core assets and transformations that allow software to be designed and built efficiently, such that he can be cost-effective and profitable, as well as delivering agile products and solutions.

I apologize if I sound like a Dilbert cartoon, with that string of buzzwords. I want to emphasize that I was impressed. A&A is building a library of re-usable software components that will allow them to develop software products faster, with higher quality. The unspoken secret, if they are successful, is that their maintenance and regression-testing costs could plummet. This is not rocket science in the software business, but it has rarely been implemented successfully in large, staid telecom NEPs. According to Bhaskar’s data, they are about 30% of the way through transforming products to the new architecture, and will reach 100% by the end of 2018. That, folks, is astounding. I’ll probably ask for the fine print somewhere toward the end of next year, but it’s impressive nonetheless.

Component-based software applies efficient “manufacturing” methods to software. But anyone in the telecom OSS business (forgive my “old” term; it’s used to make the point) knows that the software itself is only a small percentage of the total cost and complexity of delivery. The balance falls in a) selling the right thing (important! Not always done!) and b) efficient delivery, integration, data migration, configuration and training.

Historically, vastly more money has been spent on these closely tied services around OSS than on the core software, and in new technology areas the trend is even more pronounced. Moreover, this is where solutions can be “made or broken” – with an extensible architecture, or with one too focused on the initial service at hand and therefore insufficiently flexible. Nokia, according to Bhaskar and his lieutenants, is investing in automation that makes its delivery more efficient and repeatable, and also in the skills to help its SP clients implement and achieve automation. We look forward to evidence and success stories from the field, but this is the right approach.

 

Figure: Elisa is applying Nokia-sourced AI to achieve automation in incident reduction and remediation

There was a similar story from Basil Alwan and his extended team. His “IP and Optical” BU has a broad and aggressive story, but I’d like to focus on a few related topics (see also our research note on this topic, under the “research” tab): the new FP4 service router family, Network Services Platform (their SDN+ engine), and the Deepfield analytics software. Central to ARG’s focus, Deepfield is becoming a core asset for all of these technical platforms (both hardware and software), delivering the deep network and service intelligence needed to make better, automated decisions. NSP and the SR-series routers then become the decision-making and execution engines that effect grooming, cleansing, path optimization and healing. The topics are far too deep and broad to be covered here, but IP/Optical is working toward a set of assets and process flows that allow networks to be much more flexible, agile (quick), efficient (in use of capacity) and resilient (via smart, potentially mass, healing and grooming). This is cost efficiency combined, in a happy mix, with potentially better resiliency and customer satisfaction.

Our opinion here at Appledore is that this matters greatly to the core of our industry – Service Providers. Efficiency with quality is the lifeblood of competitiveness, both in terms of margins, but also in retaining and winning back business from enterprises, OTTs and Webscale competitors. I hope what we were shown at a high level proves its mettle in deployment, and that SPs embrace the technology, process and mindset changes necessary to reap the full rewards of cloud technology and operations.

 

Grant

The post Nokia Global Analyst Forum 2017: a focus on enabling transformation and automation appeared first on Appledore Research.

Why loose coupling and independent domains matter


“End to end” thinking is a mantra in our industry. We have always felt the need to understand the entire experience delivered by a service, ideally as a customer sees it.

This much is true.

Too often however, and driven by (many) decades-old technology and practices, we mistakenly believe this means building monolithic network/service models, fulfillment processes, and assurance processes. This practice must end if we want true agility. Rather, it is imperative that we move to a very different conceptual model – one of loosely coupled, autonomous (and likely autonomic) domains.

Why? Consider three scenarios:

  1. Scenario #1: An SP is involved in a digital collaboration with a network of health providers, medical devices (“things”), and a medical technology company operating cloud-based services. Each of these firms will manage its own “domain” and therefore they must be loosely coupled. In fact, service provider engineers may have little knowledge that the other domains even exist.
  2. Scenario #2: Virtual technologies such as NFV, SDN and modern RANs must be self-managing (autonomic). Therefore, they too must be loosely coupled to any telco-wide E2E view and service model. Within each domain NFVs may be moving and scaling, SDN flows re-arranged, and RAN parameters dynamically optimized by SON. It is neither possible nor desirable to micro-manage them as part of individual services.
  3. Scenario #3: A new technology or administrative domain is added, or the management method for, say, NFV is dramatically upgraded. We want to have minimal impact on E2E processes and minimize the integration nightmare. So, once again, loose coupling to the rescue!

The common thread to all of these is that each domain maintains its own detailed view, domain specific algorithms and methods, and [highly dynamic] state, and abstracts and exposes a set of “service” characteristics to E2E processes, whether fulfillment, assurance, capacity expansion or other management tasks. Engineers accustomed to having detailed drill-down available need to get comfortable with a more abstracted view. On the flip side, there will be parallel teams (in other departments or other firms entirely) managing those domains in more detail.
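
As a sketch of what this abstraction boundary might look like in software (all class names, fields and values below are our own invention, not any vendor’s interface), a domain manager keeps its detailed, fast-changing state private and exposes only a coarse service view upward:

```python
from dataclasses import dataclass, field

@dataclass
class RanDomain:
    """Hypothetical domain manager: owns detailed, fast-changing state."""
    cells: dict = field(default_factory=dict)  # per-cell config, SON parameters, etc.

    def optimize(self):
        # Domain-internal logic (e.g. SON) rearranges detail freely;
        # E2E processes never see or micro-manage this.
        for cell in self.cells.values():
            cell["tilt"] = min(cell.get("tilt", 0) + 1, 10)

    def service_view(self) -> dict:
        # The only thing exposed upward: abstracted service characteristics
        # (static placeholder values here, purely for illustration).
        return {
            "availability": 0.999,
            "latency_ms": 18,
            "capacity_headroom": 0.4,
        }

# The E2E layer composes abstracted views, never raw domain detail.
domains = [RanDomain(cells={"c1": {}})]
e2e_view = {i: d.service_view() for i, d in enumerate(domains)}
```

The E2E layer composes many such abstracted views; the drill-down detail stays with the domain team (or firm) that owns it.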

Within those domains, processes will become incrementally and inevitably more automated. The volume of configuration changes and the complexity of data demand automation, and automation can also yield lower costs and proactive maintenance – a better customer experience. There’s a joke in the airline industry: tomorrow’s jet cockpit crew will consist of a pilot and a dog. The pilot’s job is to supervise the autopilot. The dog’s job is to bite the pilot if he touches anything.

From our surveys of the industry, this may be accepted by some in principle, but in execution we are far from adopting a true, multi-domain approach. The Appledore Research Group framework architectures and best practices are predicated on this need – the following is taken from an upcoming major report:

Several suppliers are advocating just such approaches. CENX has re-vectored its entire business to emphasize a federated domain approach to active service assurance. Nokia is applying these concepts in part to rationalize and modularize its huge and growing portfolio of OSS capabilities – from SDN to SON to MANO to … Netcracker’s inventory vision is one of multiple, loosely coupled, autonomic domains abstracted to an E2E view. So far, our service provider deployment data does not indicate widespread adoption and deployment of this approach, but we expect – and hope – that the “next step” is coming.

For more information on this topic we refer you to our published research on closed-loop automation, automated service assurance, dynamic inventory, and our upcoming research on best practices in orchestration, modeling and onboarding, all available (with subscription, or individually) here at www.appledoreresearch.com/research.

Enjoy!

Grant


The “MANO+” Market: a Work in Progress: Major Survey by Appledore Research Group


Appledore Research Group has just published Part 2 of our major survey of the “MANO+” market. This looks at the major needs and trends in orchestration necessary to achieve hands-off, flexible automation. We believe the four key areas are:

1. Model- and Policy-Driven Orchestration (truly model native)

2. Sophisticated On-Boarding and Modelling, including DevOps artifacts

3. Rapid, Automated Service Assurance to drive continuous orchestration

4. Dynamic, distributed and autonomous inventory

This framework research is available under subscription or for purchase:

Here


Winning at the edge


There have been a number of 5G trial announcements in the last few months that show exciting progress in terms of increased network speed and lower latency. What all of these trials have in common is that they are focused on the radio technology aspects of 5G.

Image: freeimages.com/Ned Horton

Yet radio improvements alone are not sufficient to change the inherent value of a CSP network to its customers. The improvements in radio (particularly latency) are rapidly lost in the overall performance of the connectivity between customer equipment and the central cloud.

For CSPs to gain added value from 5G they need applications that cannot simply be delivered by connections to applications hosted in centralized data centers or the cloud. This will require computing power to be near the edge of the 5G network, hosting innovative applications that utilize the local increased bandwidth and lower latency.

Appledore Research has just published research looking at Edge Computing and how this can differentiate CSPs, beyond network connectivity. In the research, we look at the related topics of edge computing, white box CPE and improved radio access technology in 5G. We assess the current direction and economics of 5G and network slicing; the current direction and economics of universal CPE in CSPs and some of the existing standard and ecosystem initiatives in edge computing.

The report is recommended for CSPs and their suppliers looking at the adoption of 5G and edge computing.


Ericsson emerges from the clouds


Ericsson held its analyst day in London last week. High up in The Shard, (surrounded by grey clouds, torrential rain and strong winds) we learnt about Ericsson’s new vision for 5G and the wider ecosystem that it could support. As a metaphor for the challenges that Ericsson is currently facing the setting for the meeting was apt.

5G will be an evolution not a revolution

At the event, Ericsson presented a clear and focused vision for 5G. Gone were the “5G will solve everything” statements of the past and in their place were clear, pragmatic and phased goals for 5G. The immediate 5G business case (for Ericsson and the CSPs) is now about the evolution of LTE with 5G New Radio. This 5G evolution will support a CSP in delivering the continued growth of consumer data volumes in an environment of flat revenue growth. In particular, this evolution will not require a change in the existing CSP business model.

5G can be an opportunity for growth for CSPs

Ericsson’s vision for 5G, as a longer-term engine for new services and CSP growth was also showcased at the event. Ericsson predicted up to 36 percent additional CSP revenue growth with IoT and further opportunities with distribution of cloud nearer the network edge.

Ericsson presented its IoT Accelerator platform, which provides a framework, enabled by 3GPP connectivity, in which IoT developers, device manufacturers and the CSPs can collaborate to create new services.  Ericsson’s distributed cloud showed the potential for a CSP to enable dynamic distribution of computing from centralized cloud through to placement at the network edge.

In both the IoT accelerator and distributed cloud, Ericsson clearly showed strong technical capabilities that could enable CSP growth. However, this opportunity is clearly limited by two challenges.

IoT and edge cloud still lack clear business models

Firstly, there is no coherent business model that would allow Ericsson, the CSP and third parties to monetize IoT or edge cloud value. A number of highly valuable IoT proofs-of-concept were discussed during the day. However, Ericsson had found that turning these into commercial deployments was problematic, even with a willing customer. The challenges of satisfying multiple companies, with complex relationships and conflicting business models, stopped commercial roll-out.

CSPs still lack a global reach for services beyond connectivity

Secondly, the fragmentation of CSPs globally makes the creation of global IoT and edge cloud value-added-services, over and above network connectivity, problematic. A global player in IoT or distributed applications is looking for a global provider or market place. CSPs and their suppliers, like Ericsson, need to address this.

Ericsson has the opportunity to solve these challenges if it addresses 5G business model innovation as strongly as technical innovation. Its recent joint work with BT, on the business benefits of network slicing, points in this direction.

Appledore Research has recently completed a research report on the opportunity for CSPs at the edge. We will be exploring the economic model for edge computing in future research.

Image: freeimages.com/Marcus Gunnarsson



Appledore Research publishes 2018 MANO Supplier Assessment Report


The industry has made considerable progress in the 3 years since our first report, titled “MANO SUPPLIER SCORECARD: TOP PICKS FOR 2015”, published in May 2015. Our research shows more CSPs have moved out of the PoC phase and are investing in SDN and NFV deployments for mobile core, enterprise edge, RAN, and firewall/security-based virtualized solutions (figure 1). We reflect this progress against the four management domains that we believe are of critical importance to future operational and business outcomes.

These four domains are Dynamic Inventory, Rapid, Automated Service Assurance (RASA), Modeling/Onboarding, and Policy Driven Orchestration. This report is a follow up to “2017 MARKET SUMMARY: CRITICAL ADVANCES IN ORCHESTRATION FOR AGILITY AND AUTOMATION” published in July 2017.

The full report expands Appledore’s update of the state-of-the-art in MANO and orchestration (“MANO+”), with a survey of industry progress, tables showing the progress of leading suppliers, and 22 “mini profiles” with our summary analysis.

Supplier assessment is useful for CSPs to understand the maturity of products and solutions and, more importantly, how each supplier fits into the overall MANO+ ecosystem. Suppliers include Aria Networks, Amdocs, CENX, Ciena, Cisco, Cloudify, Comarch, Dell EMC, DonRiver, Ericsson, EXFO, HPE, Huawei, IBM, Microsoft, Moogsoft, MycomOSI, Netcracker, Nokia, Openet, Oracle, Spirent, and VMware.

Since our inaugural report the market has progressed significantly, from simple VNF instantiation, through initial steps to assure NFs and services, and is now evolving to support automation.

Automation is critical: we believe that virtualization can change the fundamental cost structure of the industry, making it vastly more competitive; but only with significant business, process and software technology changes.

Inventory is only slowly moving from offline, monolithic databases to the real-time, distributed and autonomous inventories that will be demanded in the future. It is being addressed in several ways:

  • MANO vendors building inventory into orchestration
  • Domain managers building inventory into their solutions (SDN-C, SON, VIM)
  • Assurance vendors building topology that either a) replicates or b) consumes domain data to provide an E2E view

Part of the industry’s challenge is the inherent conflict between a distributed future and CSPs’ need for working, complete solutions to today’s problems.

Orchestration is transitioning from pre-defined workflows toward model and policy driven products. This is essential for many aspects of automation, from self-instantiation to controlling complexity and risk. Today, policy constructs are often rudimentary. Today’s operational models have not yet stressed the products – most VNFs reside in “FAT VMs” and most network services are relatively simple and relatively static. Yet, we are seeing substantial re-architecting of products in anticipation of future needs, and likely in response to CSPs’ current buying requirements. To understand differences, it is important to investigate the richness of models, and the extent to which rules can act on model parameters.
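
To illustrate the distinction (with invented model fields and thresholds), a model-driven approach puts the parameters in the model and keeps the rule generic, rather than hard-coding a per-service workflow:

```python
# A service model carries declarative parameters; nothing service-specific
# lives in the rule itself. All field names and values are hypothetical.
model = {"vnf": "vFirewall", "min_instances": 1, "max_instances": 5,
         "scale_out_cpu": 0.8, "scale_in_cpu": 0.3}

def policy_decide(model: dict, observed_cpu: float, instances: int) -> str:
    """Generic rule: acts on model parameters, never hard-codes the service."""
    if observed_cpu > model["scale_out_cpu"] and instances < model["max_instances"]:
        return "scale_out"
    if observed_cpu < model["scale_in_cpu"] and instances > model["min_instances"]:
        return "scale_in"
    return "no_op"

# The same rule drives any service whose model carries these parameters.
print(policy_decide(model, observed_cpu=0.9, instances=2))  # scale_out
```

The richness of the model (how many parameters rules can act on) is exactly the axis we suggest investigating when comparing products.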

On-Boarding mirrors orchestration. The state of the art is vastly ahead of 2015, but since most VNFs are still not cloud native, neither are models. Even with cloud-native VNFs, the industry needs practice with granular models, and with the DevOps artifacts needed to automate their lifecycles. In fact, most models and on-boarding processes capture DevOps logic for existing NFs only simplistically, if at all. On the other hand, the rate of change is significant, and a few suppliers are investing in tools and communities that allow the industry to share best practices – which we believe is critical. We cannot afford to re-invent best practices for each of hundreds of CSPs worldwide. These innovators are called out in the mini profiles that are the core of this report.
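
As one hypothetical illustration of what carrying DevOps artifacts in an on-boarding model could mean (the descriptor shape below is ours, not TOSCA or any standard’s), the descriptor carries lifecycle automation, not just packaging metadata:

```python
# A hypothetical on-boarding descriptor: each lifecycle stage names the
# DevOps artifacts an orchestrator could run automatically at that stage.
vnf_descriptor = {
    "name": "vFirewall",
    "image": "vfw:1.4",
    "lifecycle": {
        "instantiate": ["configure_interfaces.sh"],
        "healthcheck": ["probe_dataplane.sh"],
        "upgrade": ["drain_sessions.sh", "swap_image.sh"],
        "scale_out": ["rebalance_flows.sh"],
    },
}

def artifacts_for(descriptor: dict, stage: str) -> list:
    """Look up the automation artifacts for a lifecycle stage (empty if none)."""
    return descriptor["lifecycle"].get(stage, [])
```

A descriptor with an empty lifecycle section is, in effect, today’s common case: the VNF is packaged but its lifecycle is still operated by hand.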

Assurance shows the most embryonic progress, and the greatest variation among players. This is not surprising, since the industry (e.g.: ETSI) began serious work on assurance more than a year after basic orchestration. We are looking to a future where “assurance+analytics” provides a flexible, real-time platform to guide scaling, healing and insights and ultimately leverage Machine Learning to help identify problems and opportunities that elude human efforts.
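
A toy sketch of such a loop (detection method, thresholds and actions are all invented for illustration): assurance collects, analytics detects, and the result drives an orchestration action, closing the loop:

```python
import statistics

def detect_anomaly(samples: list, latest: float, k: float = 3.0) -> bool:
    """Flag `latest` if it deviates more than k standard deviations from history."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples) or 1e-9  # guard against zero variance
    return abs(latest - mean) > k * stdev

def control_loop(history: list, latest: float, orchestrator_actions: list) -> list:
    # Assurance feeds analytics; analytics triggers orchestration: the loop closes.
    if detect_anomaly(history, latest):
        orchestrator_actions.append(("heal", latest))
    return orchestrator_actions

# A latency spike against a stable baseline triggers a healing action.
actions = control_loop([10, 11, 10, 12, 11], latest=50, orchestrator_actions=[])
```

Machine Learning would replace the crude threshold test here; the loop structure – collect, decide, act – stays the same.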

A look through the mini-profiles illustrates the broad spectrum of assurance approaches. Some of these are in fact complementary (e.g.: real time data collection complements a real-time topology graph). Others aspire to deliver a mostly E2E assurance process, and a few truly re-think what assurance may be in the future.

Some of the areas CSPs must consider include:

  • flexible, extensive topologies
  • a focus on orchestrated assurance that complements traditional orchestration
  • integrated architectures that break down data and use-case silos
  • real-time operation of both assurance and analytics

Successful assurance players will be those that enable the convergence between assurance, orchestration and software network functions.

Communication Service Providers and Cloud Providers can receive a free copy of the summary report and key findings by contacting Appledore Research directly.

Future articles will look at each of the four areas critical in the MANO market.

  1. Dynamic Inventory and Live Topology
  2. Model and Policy-Driven Orchestration
  3. Rich On-Boarding, supporting DevOps
  4. Rapid Automated Service Assurance, re-architected to support virtualization and automation


Why Models Matter for Automated Network Operations:


New Research from Appledore Research Group plots the status and trajectory of the industry and leading players.

Appledore Research Group has recently released the 2nd major report in a series of 3 that investigates the necessary foundation to achieve flexible, efficient closed-loop automation in virtualized networks. This report, Appledore Research Best Practices Part 2, is available here.

The report identifies 4 key technology enablers; in this blog I will comment on two that are closely linked: first, model- and policy-driven orchestration; and second, onboarding tools and practices that help Service Providers build, maintain and share those models and rules.

Without diving into the details, we wanted to emphasize that it is essential that orchestration be effected natively by models; that those models must reflect intent; and that rules (or other methods) must be capable of instantiating that intent based on context.  Any old model won’t do.
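
A toy example of the difference (all fields, values and placement logic here are ours, not any vendor’s): the model states intent – what the service must achieve – and a generic rule instantiates that intent against the current context:

```python
# Intent: what the service must achieve, not how to achieve it.
intent = {"service": "video-edge", "max_latency_ms": 10, "availability": 0.999}

# Context: what the environment currently offers (hypothetical sites).
sites = [
    {"name": "core-dc", "latency_ms": 40, "capacity": 0.9},
    {"name": "edge-1", "latency_ms": 6, "capacity": 0.5},
    {"name": "edge-2", "latency_ms": 8, "capacity": 0.1},
]

def instantiate(intent: dict, sites: list) -> str:
    """A rule derives a concrete placement from intent plus context."""
    candidates = [s for s in sites
                  if s["latency_ms"] <= intent["max_latency_ms"]
                  and s["capacity"] >= 0.2]  # invented headroom rule
    if not candidates:
        raise RuntimeError("intent cannot be satisfied in current context")
    return min(candidates, key=lambda s: s["latency_ms"])["name"]

print(instantiate(intent, sites))  # edge-1
```

If the context changes – edge-1 fills up, say – re-running the same rule yields a different, still intent-compliant placement. “Any old model” that merely lists configuration values cannot do this.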

The good news is that nearly every orchestration vendor is delivering model based orchestration that utilizes rules to specify at least some parts of intent.  But it’s also clear that most are still leagues behind the “HyperScale” Web leaders (Azure, AWS, GOOG, BlueMix) in terms of true hands-off automation, and fully intent-based, autonomous operation. That is why Microsoft’s Azure, despite being deployed in adjacent industries, is shown as the most evolved single system in our industry progress chart.  Microsoft documented and defended both the method and the evidence of truly intent-based, autonomic operation, at huge scale. For the most part, telecom examples have been less intent based, vastly lower scale, and aimed at very limited use cases.

In the core of telecom orchestration we are seeing progress. Major players have evolved on our stated axes, including Nokia’s CloudBand, Ericsson’s Cloud Manager / Dynamic Orchestration, and Netcracker’s version 10. More importantly, all of these claims are supported by major Service Provider deployments operating real, cash-generating services at meaningful scale. Others have documented less: HPE, Oracle and Huawei provided relatively little information on their ambition for automation, implementation specifics, and deployments and metrics achieved. Cloudify, a disruptive entrant, demonstrates a very clear ambition to implement clean, model-native, generic orchestration. (Watch for upcoming research on what intent encompasses and how best to achieve it.)

From conversations with suppliers, as well as with Service Providers, we believe that the modest pace of observed change is less a result of suppliers’ capabilities and more one of industry ambition: many suppliers indicated that their Service Provider customers – who determine what is actually delivered, after all – are hesitant to fully turn over control to “The Machines”. We are also told that those SPs are focused on implementing very specific service and automation tasks, rather than designing orchestration methods that are truly generic (and whose operation is entirely determined by the model and context). If the goal is to implement something well understood and relatively simple, sometimes it’s easiest (and cheapest, and quickest… initially) to simply create the logic rather than define a universal, model-driven mechanism. And from what we observe, some of that occurs.

We encourage anyone interested to read the series of three framework research reports that outline:

  1. What is the state of “MANO+ orchestration” as of 4Q2017?
  2. What is the ambition, status and direction of 20+ leading players?
  3. What are the best practices to achieve true automation?

The agility and profitability of our industry depend on success.

Grant

grant@appledorerg.com


Key Findings from MWC in Cloud Management – 1



The big themes for MWC 2018 in Telco cloud management software were automation, cloud-native readiness, and machine learning. The Appledore Research analyst team met with 55 companies during the 4-day event, including CSPs, incumbent suppliers, and new entrants. Our roundup of the event will be published in four segments. CSPs are deploying NFV and SDN in small-scale deployments. In fact, making the business case and trying to change the culture are the two biggest obstacles now – not the technology itself. Unlike the PoC days of NFV, CSPs need to focus on ROI justifications for virtualization and the next wave of technology deployments. This can be summarized into three buckets:

  1. New revenue justification, which today looks more like revenue substitution. Consider the case of SD-WAN, which is technology substitution. The reality is that CSPs are not moving at the speed of OTT or web-scale cloud providers, so generating new lines of business rarely occurs.
  2. Improved profit margins, focusing on reducing OPEX. VNF deployments move CAPEX down only marginally, so unless automation is increased and manual processes are eliminated, OPEX does not go down and profit margins do not change.
  3. Differentiation from core competitors through an increased focus on the customer. CSPs have been doing this for years, mostly in the managed services and mobile market segments. The benefit of NFV and SDN is agility. That should further improve CEI scores if implemented correctly, which means offering true self-service and scale-out network and infrastructure fabrics to cut order cycle times from months to minutes.
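
The margin logic in bucket 2 is simple arithmetic. A purely illustrative calculation (all figures invented) shows why OPEX reduction, not marginal CAPEX relief, moves the needle:

```python
def margin(revenue: float, capex_amortized: float, opex: float) -> float:
    """Operating margin as a fraction of revenue (toy model)."""
    return (revenue - capex_amortized - opex) / revenue

# Hypothetical figures: 100 revenue, 30 amortized CAPEX, 50 OPEX.
baseline = margin(revenue=100.0, capex_amortized=30.0, opex=50.0)       # 0.20
virtualized_only = margin(100.0, capex_amortized=27.0, opex=50.0)       # 0.23
virtualized_automated = margin(100.0, capex_amortized=27.0, opex=35.0)  # 0.38

# A 10% CAPEX saving nudges margin by 3 points; a 30% OPEX cut
# from automation nearly doubles it.
```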

None of this is new. We have been espousing these tenets for 3 years, but the CSP market is moving at a tenth of the rate of true digital platform providers.

Understanding where and why to deploy edge computing and other technologies is more important than how. Fiber to the traffic pole is a 5G business case we have heard, but it won’t work if the local municipalities expect the technology to last 25 years. Sending a guy out for the next upgrade breaks the business model.

Check out our recent white paper, Automation in the Cloud Native Hybrid Network, for more insight: Cloud Native White Paper – Appledore


Research Note: EXFO Completes Acquisition of Astellia


Germain Lamonde and Philippe Morin are quietly disrupting service assurance in a segment known as the network test and monitoring market. EXFO is not going the route of Netscout, which completed a USD 2.3 billion mega-acquisition of Danaher’s Communications Business in 2015. Netscout is still working through the integration process, which admittedly takes time given the size and diversity of business units in the Danaher companies. Remember, Danaher acquired many businesses, including Tektronix, Fluke, and Arbor Networks, and did little to integrate them before the business was sold off to Netscout. Instead, EXFO is selectively picking up smaller undervalued assets in its core market that complement its core business. R&D teams can integrate the newly acquired asset faster and eliminate overlapping capabilities, which helps customers reduce the fatigue associated with swivel-chair management. EXFO is expanding its total addressable market and acquiring high-quality customers, allowing it to cross-sell active, passive, and advanced analytics solutions across the entire customer base.

EXFO completed its acquisition of France-based Astellia, which was announced last week at MWC 2018 in Barcelona. Appledore Research estimates it will add another USD 45 million in top-line revenue in 2018 and give EXFO greater access to CSPs in the Europe, Middle East and Africa region. Orange France is a long-time customer of Astellia and its largest customer by revenue. Astellia has an established footprint in many of Orange’s operating companies in Europe, the Middle East and Africa. Other key customers include 3, Telefonica, Bouygues Telecom, Altice and Zain. The combined assets now give EXFO a leading position in the mobile active and passive test, monitoring, and subscriber analytics market.

Astellia’s top-line revenue has been in decline since 2016 (figure 1). Our intelligence indicates the company has been seeking strategic options since 2014. EXFO disclosed that the deal was valued at EUR 25.9 million, a price-to-sales multiple of 0.67 on 2017 revenue. We view this as a good deal for both EXFO and Astellia. EXFO acquires complementary products to its core business and an outstanding customer base in EMEA. Key business deployment metrics of the combined company include:

  • More than 250 monitoring systems in operation
    • 120 Nova passive probe systems deployed by Astellia
    • 80 active solutions deployed by EXFO
    • More than 50 NQMS fiber monitoring solutions deployed
    • 15 Ontology deployments focused on rich topology multi-layer graphs
  • Unique combination of active testing and passive monitoring capabilities solving complex challenges in closed loop automation in the NFV/SDN domain
  • Selected by 3UK as the primary supplier (Astellia) in the pre-test, validation, and assurance of VNFs deployed in its network

EXFO management has proven its ability to integrate acquired companies in the testing and assurance market to create value for customers and its shareholders. We think EXFO will bring its core products and the newly acquired Ontology and Astellia products to address the future automation tools required in network operation centers.

Figure 1: Revenue 2015 – 2017 by company


Mash Ups, Micro-services, Re-use and Agility at MWC


At MWC this year I saw a budding constellation of similar thinking around “micro-service” innovation.  This is a very positive trend.

Every service provider wants greater agility, easier innovation, lower innovation costs and reduced ongoing maintenance burdens. No surprise here. Yet in our industry, most innovation methods and supporting software are essentially one-offs, with significant development times and point-to-point integration. This leads directly to slow rollout, high costs, and the “gift that keeps on giving” – the need to continually maintain and re-integrate these complex processes. It’s one reason that such a high proportion of IT budgets is expended on simply keeping the existing environment running – leaving far less money and manpower for innovation.

Appledore Research is not alone in promoting the need to switch to more efficient methods: typically those of service-oriented (“micro-services”, to use the latest buzzword) components, composed into complete services and fostering a high degree of re-use. Re-using “services” allows orders-of-magnitude improvements in agility and cost (it’s already built). Moreover, if you re-use a service object, you maintain it and integrate it ONCE, so that ongoing expense comes down steadily over time. This gift continues giving in a much more positive way.
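
The economics are easy to sketch (all component and service names below are hypothetical): each component is built, tested and integrated once, then composed into many services with no new integration work:

```python
# Each component is built, tested and integrated ONCE...
def provision(user: str, product: str) -> str:
    return f"provision({user}): {product}"

def bill(user: str, amount: float) -> str:
    return f"bill({user}): {amount}"

def notify(user: str, msg: str) -> str:
    return f"notify({user}): {msg}"

# ...then composed into many services at near-zero marginal integration cost.
def broadband_signup(user: str) -> list:
    return [provision(user, "broadband"), bill(user, 49.99), notify(user, "welcome")]

def tv_addon(user: str) -> list:
    return [provision(user, "tv"), bill(user, 15.0), notify(user, "tv added")]
```

Fixing or upgrading `bill` once improves every service composed from it – the maintenance burden falls instead of compounding.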

Enough theory. At MWC I saw two very different, complementary and interesting approaches that begin to show the way forward. One, Ribbon’s “Kandy”, is reasonably well established. The other, Amdocs’ “microservices360”, was announced on day one of MWC and is therefore less well documented.

Functionally, the two could hardly be more different. Kandy is in the network communications space, while microservices360 is in the BSS/OSS space. Kandy is a set of hosted communications services that can be “mashed up” and connected to one or more CSP and enterprise environments. It can be used to build entirely new CSP services, or to integrate existing services across two or more providers (or enterprises). Amdocs’ microservices360, on the other hand, is a service-oriented BSS/OSS development framework intended to deliver better cost and agility to new OSS/BSS innovation. To quote Amdocs, it is “a deployment and development platform based on vetted and tested open source tools”, and is delivered as part of new Amdocs functionality. Once in place, either Amdocs or the CSP can add new micro-services and use the microservices360 environment to integrate disparate systems or create a new functional flow (e.g.: provision a new service, using resources across existing OSSes as well as new micro-service functionality).

In concept, however, the two could not be more similar: each is a library of re-usable, loosely coupled functionality with an API environment, and each brings an agile, service-oriented approach to innovation. Moreover, each is well suited to integrating existing, disparate environments: Kandy can link into many CSP and enterprise environments, acting as a sort of gateway when needed, and microservices360 serves a similar function between existing, typically older-tech OSSes and BSSes.

Some day we will have entirely modern, service-based OSS, BSS and networks (e.g.: orchestrated services in the network). We will have well-structured catalogs of customer- and network-facing functionality that can be strung together by NFV-Os and service orchestrators. But today we must do our best to make the existing, hybrid infrastructure agile. We have a large embedded base of OSS and network functionality, small pockets of new functionality in both domains, and the need to innovate, to link existing functionality more efficiently, and to build, step by step, toward an agile future. Service-oriented frameworks, along with essential “services”, are the best path forward.

Appledore Research believes that these two offerings highlight a promising trend, and we encourage Service Providers and software suppliers to continue the momentum – both in the network and in the OSS/BSS/MANO+ environments.

Grant

