Channel: Dell EMC | TelecomTV

Security & Programmability, in the Virtualized Network

Ramki Krishnan, Distinguished Engineer and Chief Technology Officer of NFV, Dell Networking.

Are discussions about networking, SDN and traditional data center virtualization becoming blurred, and how are software-defined networking and network programmability becoming more of a challenge for our industry? Here at the Dell TIA NOW Studio at TIA 2016, TIA NOW looks into this topic with Ramki Krishnan, Distinguished Engineer and Chief Technology Officer of NFV at Dell Networking.


Wrapping MWC 2016 with Dell - What Have We Learned?

Dell's CTO and VP of Technology Strategy Paul Struhsaker joins us to wrap up MWC 2016. Struhsaker talks about the new trends and insights that emerged at this event and what to expect from the tech industry in the coming year.

Connect What Matters: IoT in a Smart City

Dell’s moniker for Mobile World Congress 2016 is “Connect What Matters”, and for good reason. Connecting M2M industrial devices has quickly advanced into critical applications in smart cities, connecting infrastructure like bridges and dams. But now there are new solutions in the smart city that are benefiting from IoT technologies and a new technology called edge computing. Join TIA NOW at MWC 2016 as they explore the next evolution of IoT in a smart city with Liam Quinn, Vice President and Dell Senior Fellow, and Bill Morelli, Director of IoT, M2M and Connectivity at IHS.

Telecom Leaders Talk IoT and Next-Gen Networking

TIA’s CTO Council Industry Panel with telecom industry leaders was filmed following TIA’s CTO Council meeting and preceding Dell World 2015 in Austin, TX. The panelists discussed the future of IoT and NFV on next-generation networks. The panel included Drew Schulke, Executive Director of Next-Generation Infrastructure at Dell; Tim Abels, IoT Architect at Intel; Bruno Rijsman, Vice President of Architecture at Juniper Networks and Manish Jindal, VP and Head of Strategy Development and Portfolio Management at Ericsson.

IoT Solving Real World Problems

Dermot O’Connell, Executive Director and General Manager of OEM and IOT Solutions EMEA at Dell, explains the benefits of Dell’s IoT gateways, giving several use cases in which customers implemented Dell’s IoT gateway solutions to solve a real-world problem.

Dell and Intel’s IoT Partnership

Intel’s long-standing partnership with Dell is important for the IoT community, said Rose Schooler, VP of IoT Strategy and Technology at Intel. TIA NOW got this and much more from Schooler at Dell World 2015, as she talked specifically about Dell’s IoT gateway solutions.

Dell’s IoT Gateway Solutions

Dell's OEM partners and customers use Dell solutions to help their own customers flourish. TIA NOW walks through Dell customer use cases, featuring ELM Energy implementing Dell’s IoT gateway solution to strengthen its microgrid. Included in this segment are interviews with Rose Schooler, Vice President, IoT Strategy and Technology at Intel Corp; Dermot O’Connell, Executive Director and General Manager, OEM and IOT Solutions EMEA at Dell; Ron Nash, CEO at Pivot3; Paul Prince, CTO of MangStor; Mazin Bedwan, Co-Founder and President of V5 Systems and Chase Sanders, Business Development Manager at ELM Energy.

Dell's IoT Practice

Dell has implemented the OneFiveTen framework to guide strategic thinking on its IoT action plan. Joyce Mullen, VP and GM for Global OEM and IoT Solutions for Dell, discusses some of the latest developments from the Dell visioning and planning OneFiveTen meeting, and the steps being taken to grow Dell’s presence in IoT. This session is hosted by Limor Schafman, Director of Content Development for TIA.


Moving From Mobility to Applications: Dell IoT Solutions

"Moving from mobility to applications for mobility is what a large part of Mobile World Congress 2016 is all about," said Paul Struhsaker, CTO and VP of Technology for Dell. Struhsaker adds that sourcing the right vendor partners is very important to launching the right IoT solution for your customers. Join TIA's Limor Schafman, Director of Content Development, as she finds out more from Strushaker about what we can expect see in the IoT space in 2016.

Telcos' NFV Solutions Gain More Steam On An Open Cloud Platform

The move from network lock-in to open source for NFV is partly due to the work of Jeff Baher, Sr. Director of NFV Solutions at Dell, and Chris Wright, Chief Technologist at Red Hat. Supporting software-defined networks for storage, compute and network capabilities on an open network platform brings new requirements. Telcos are more than willing to welcome an open cloud platform for network communications and to further explore the benefits of open source for NFV solutions.

Network Functions Virtualization: Are We There Yet?

How far have we come in demonstrating and implementing an open, cloud-based NFV model to further support transformation within the telco industry? Joining us from MWC 2016 are Brian Higgins, VP of Network Planning at Verizon; Chris Wright, VP and Chief Technologist at Red Hat; Gee Rittenhouse, SVP and GM of the Cloud and Virtualization Group at Cisco and Drew Schulke, Executive Director of Next Generation Infrastructure at Dell.

Data Center Solutions: A Business Case

Advanced compute and server solutions are the backbone that supports billions of device connections, and they are based on open standards to help maximize the compatibility, scalability and expandability of our communications networks. How are these new server and storage solutions helping telecommunications companies bring their intellectual property to market faster and ultimately lower operational expenses, increase profitability and simplify operations? At the Dell TIA NOW studio, network operators will explore the latest OEM solutions in the data center that promote reliable, cost-effective components for high-volume and hyper-scale environments.

The Network is Breathing In Capacity: C-RAN & vRAN

Sports stadiums and crowded shopping malls often require peak capacity from their networks to support thousands of high-bandwidth devices. But with NFV and virtualization technologies, network topologies are now less cost-prohibitive to provision and build. Linsey Miller, Vice President of Marketing for Artesyn, tells TIA NOW how C-RAN and vRAN are changing the way we provision our networks.

BLOG: taking IoT to the edge

  • 'Fog Computing' will answer real capacity and latency problems
  • Why the Fog trend should be the CSP’s friend
  • CSPs can either host edge computing or provide it as part of a service

Fog: It’s a catchy IT name for the process of distributing processing and storage back out towards the edge of the network where, its protagonists argue, it is needed to rebalance the cloud architecture. The current general-purpose Cloud IT model, with end devices attached directly to a central data centre, is not universally optimal as applications become more demanding - especially in the Internet of Things (IoT) domain, where latency and sheer data volume are projected to become major issues. Fog Computing is part of the answer.

The good news is that the trend should be the CSP’s friend. Fog needs a network edge to play from and that offers access network operators the opportunity to provide compute facilities connected to their communications networks.

That’s the Fog pitch, but how well does it map onto IT and network reality? Is this really just a “Hey, remember me!” strategy for box vendors and CSPs who feel that corporate enthusiasm for pure Cloud is elbowing them out?

No is the answer. There are some real Cloud challenges looming and Fog computing, in one form or another, looks like being part of the solution.

Solutions to round trip delay and sheer volume

Metaphorically, we can think of the ‘Cloud’ as high up and thereby able to serve a ‘footprint’ of millions of end devices from horizon to horizon. On this basis Fog describes a thinner layer of resources much closer to the ground (with a narrower purview) but able to serve some applications better by being a short ‘hop’ or two away and therefore more responsive to the end system.

For some applications - especially in IoT - Fog might also find itself performing triage on data straining to get to the cloud by doing some preliminary sifting to analyze and perhaps distil, throwing out the unnecessary or repetitive.

So the first job for Fog is to solve the round-trip delay problem for some of the data heading from the edge to the middle of the cloud. Until somebody finds a way to beat the speed of light we are stuck with long-distance fibre transmission delay which, the experts say, has to be overcome if the promise of things like driverless cars or remote surgery is to come to fruition. The only way to achieve the necessary latency for these applications is to keep the critical data an end device interacts with right at the edge of the network, a short journey away - enter Fog computing.
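
To put rough numbers on that, here is a back-of-the-envelope sketch. It assumes the usual approximation that light in optical fibre propagates at about two-thirds of c (roughly 200,000 km/s), and real networks add switching and queuing delay on top of this physical floor:

```python
# Back-of-the-envelope fibre latency. Assumes light in fibre propagates at
# roughly 200,000 km/s (about two-thirds of c) and ignores switching,
# queuing and processing delays, which only make things worse.

FIBRE_KM_PER_MS = 200.0  # ~200,000 km/s expressed in km per millisecond

def min_round_trip_ms(one_way_km: float) -> float:
    """Hard physical floor on round-trip time for a given one-way distance."""
    return 2 * one_way_km / FIBRE_KM_PER_MS

for label, km in [("edge site, 10 km away", 10),
                  ("regional data centre, 500 km", 500),
                  ("distant cloud region, 3,000 km", 3000)]:
    print(f"{label}: at least {min_round_trip_ms(km):.1f} ms round trip")
```

A control loop budgeted at a few milliseconds simply cannot reach a data centre thousands of kilometres away, whatever the protocol: 3,000 km of fibre costs roughly 30 ms before any processing happens, while an edge site 10 km away costs a tenth of a millisecond.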

Perhaps the biggest long-term challenge to the central ‘Cloud’, as we currently understand it, is IoT. At the moment we’re imagining billions of ‘things’ just popping up at long intervals to send a few bytes of data. But the fact is that not all applications are going to be so undemanding. It’s already possible to see today’s simple, telemetry-style applications being beefed up to return more and more data over time.

Take the humble domestic boiler. It can be rigged up to return stats on its power usage and can even be controlled to maximise efficiency and reduce bills. All well understood as a worthy metering application today. But what if it could return a constant stream of information on the state of the boiler via sensors? That might enable a central system to use big data analysis (feeding in data from all the boilers of the same model) to be able to predict from tell-tale signs (vibration, overheating, lowered pressure) an imminent failure and to replace the boiler before its owner even knows there’s a problem.

But that beefed-up boiler app will generate a vast amount of data - perhaps several state notifications a second - to identify the fatal pattern. Multiply that by thousands of boilers and the data volume could overwhelm both the network and the cloud storage and compute facility. However, Fog facilities could aggregate a few hundred boilers at a time, distilling the data for each boiler and perhaps forwarding only the ‘exceptions’ to the central Cloud, thus preventing network overload.
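
A minimal sketch of that triage step follows. The thresholds, field names and exception rule here are illustrative assumptions, not any real product’s API: the Fog node summarises each boiler’s raw readings locally and forwards only the exceptions upstream.

```python
# Hedged sketch of Fog-style triage: aggregate raw sensor readings locally
# and forward only 'exceptions' to the central cloud. Thresholds and field
# names are illustrative assumptions, not a real product API.
from statistics import mean

VIBRATION_LIMIT = 5.0      # hypothetical alert thresholds
PRESSURE_FLOOR = 1.0
TEMPERATURE_CEILING = 90.0

def triage(boiler_id: str, readings: list[dict]) -> dict | None:
    """Distil a window of readings; return an exception report or None."""
    summary = {
        "boiler": boiler_id,
        "avg_vibration": mean(r["vibration"] for r in readings),
        "min_pressure": min(r["pressure"] for r in readings),
        "max_temperature": max(r["temperature"] for r in readings),
    }
    anomalous = (summary["avg_vibration"] > VIBRATION_LIMIT
                 or summary["min_pressure"] < PRESSURE_FLOOR
                 or summary["max_temperature"] > TEMPERATURE_CEILING)
    return summary if anomalous else None  # forward only the exceptions

# Several notifications a second collapse into one small upstream message,
# or nothing at all, before they ever touch the backhaul:
window = [{"vibration": 6.2, "pressure": 1.4, "temperature": 71.0},
          {"vibration": 6.8, "pressure": 1.3, "temperature": 73.5}]
report = triage("boiler-0042", window)
if report:
    print("forward to cloud:", report)
```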

Oscillating gently

It’s not that ‘cloud’ is somehow ‘wrong’ or is going to be replaced. Fog is just one more response to the continually shifting balance of advantage between centralized storage and processing (economies of scale, ability to analyze huge data sets, processing flexibility) and distributed computing and storage (local control, reduced network costs, increased responsiveness).

That oscillation began when the first mainframe computers spawned disruptive mini computers, decentralizing processor power and storage to the departmental level. It’s a process that accelerated again with the advent of the PC. Then it turned around and headed back towards the center with client-server computing. Now with Cloud we’re close to mainframe style ‘peak central’ again, so the advent of Fog might be seen as the latest oscillation away from the center and towards the edge as the underlying technologies and applications change flavour again.

How might CSPs benefit from the latest oscillation?

Many Fog scenarios will involve Customer Premises Equipment (CPE), such as on-premises servers, which might play the primary data collection role for, say, an estate of sensing devices. CSPs are clearly already in the CPE game, and where ‘edge of network’ actually means customer premises they are well placed to play a provisioning role.

But perhaps more importantly, CSPs can use their distributed network facilities to either host edge computing or provide it as part of a service. Options here include old central offices/local exchanges, secure street cabinets or (if a mobile operator) on Radio Access Network (RAN) poles and towers. Any - and probably all - of the above are likely to be pressed into service.

So the good news for CSPs is that though Cloud will certainly deliver businesses and consumers ease of access to always-on applications, compute power and storage, along with reduced costs, Edge or Fog compute will be required to deliver the last mile in performance and efficiency.

This blog is the fruit of a discussion between Ian Scales, Managing Editor, TelecomTV and Brent Hodges, Internet of Things (IoT) Planning and Product Strategy and Open Fog Board Member at Dell.

Dell and EMC tie the knot and close their $67 billion merger

Image: Michael Dell, via Flickr © Joi (CC BY 2.0)

Nearly a year ago Dell announced that it was going to merge with EMC, the huge storage and data management vendor, in an eye-watering takeover deal worth US$67 billion. As we noted at the time (see Dell’s $67 billion bet on the one-stop-shop), it was “a bold move considering the sad history of technology mergers and acquisitions where the bigger they are the harder their share price tends to fall in the aftermath. Dell (the man) obviously likes a challenge.”

Today is the day the complicated M&A deal ‘closes’. It gives EMC a new name - Dell EMC - and sees Dell Technologies become the world’s largest privately-controlled technology company, incorporating a range of capabilities which it hopes will enable it to carve out a strong position in the hybrid cloud and emerging ‘fog’ markets (see Blog: taking IoT to the edge).

The Dell empire now comprises what the company describes as a unique family of businesses including Dell, Dell EMC, Pivotal, RSA, SecureWorks, Virtustream and VMware.


Why EMC?

EMC emerged in the 1990s as one of the disruptive specialist players effectively prising apart the hitherto vertically integrated enterprise computer industry, dominated by the likes of IBM. EMC’s specialisation was data storage and the tools enterprises needed to manage it. So, either by design or good fortune, it effectively lined up with the likes of Microsoft (dominating the desktop), Oracle (dominating the database market), Cisco (routing), and Dell (at that time dominating the enterprise PC market, now strong in server technology), and so on, to de facto form a ‘horizontal’ layer of specialist, best-of-breed players who promptly set about knocking the old guard off their enterprise pedestals.

The moment an industry architecture such as ‘best of breed’ is established, of course, the forces of technical disruption set about sneaking up on it. Which brings us neatly up to the 2010s, when virtualisation and the data centre - and of course cloud computing and cloud services - have come to reign supreme. A newly reintegrated set of capabilities now looks good on PowerPoint, and Dell has set about drawing them together with this deal to see if they can work as well in the real world.

Analysts point out that, unlike other megamergers, Dell has a good chance of making this work thanks to being a private company, unbuffeted by activist investors and quarterly targets, and therefore able to provide a stable environment in which to pace itself against long-term goals rather than worrying all the time about quarterly reporting requirements (as perhaps others have done).

An important part of the Dell line-up now is VMware, which comes along with EMC since EMC owns a majority share. VMware gives Dell control of the pivotal virtualisation software piece in its new data centre equation, and the capabilities it now has sit well with an ambition to champion the ‘hybrid’ cloud approach, where it can provide a solid alternative to the growing power of the public ‘webscale’ cloud market. That segment is led by AWS but also involves other huge players such as Microsoft and Google.

In effect, Dell is calculating that the private or hybrid cloud approaches can be made to combine all the advantages of classic cloud as a technical application while offering the user organisation all the comfort and security of having the whole thing housed in a private data center and under direct control. Let battle commence.


How Dell supports telcos from cloud to edge

Brent Hodges, IoT Planning and Product Strategy, Dell EMC

All telcos are interested in how they can offer the next generation of IoT services to their customers, Brent tells Guy Daniels, and Dell is aiming to help them meet their own customers’ requirements in the IoT space, from edge to core to cloud. Most important will be architectures that can scale from the edge to the core and that can be managed by IT professionals, with openness a prime requirement so that the ecosystem can rapidly evolve.

See our Fog blog 

Filmed at: Mobile Edge Computing Congress, 21-22 September, 2016, Munich

Solving the IoT Challenge: TIA's CTO Council 2016

As network business and digital services come together and telcos expand revenue opportunities through IoT adoption, what challenges exist not only in IoT adoption itself but also in developing, deploying and monetizing new connected services? At TIA's CTO Council Meeting in Austin, TX, TIA NOW covered these issues with industry leaders including Amit Tiwari, VP of Strategic Alliances and Systems Engineering at Affirmed Networks; Brent Hodges, IoT Planning and Product Strategy at Dell EMC; Godfrey Chua, Principal Analyst at Machina Research and Sameh Yamany, CTO at Viavi Solutions.

Digital Transformation: "The Next Industrial Revolution"

Jeff Baher, Senior Director of Service Provider Solutions at Dell EMC, talks to TIA NOW about the shift in the service provider space as compute in the core moves to the edge of the network.

Dell EMC On Network Transformation

Tord Nilsson, Director of Global Marketing at Dell EMC

Tord Nilsson, Director of Global Marketing at Dell EMC, tells TIA NOW where we are on the journey towards network functions virtualization, and what effect the merger of Dell and EMC will have on telecom.

Blog: Let’s drop ‘virtualization’ from NFV and move on


Here’s an idea. Why not get rid of ‘virtualization’ when we’re talking about next-generation telco networks? Not the technique, you understand, just the word.

Why? Because there are signs that its continued use is beginning to constrain our thinking. After all, the industry doesn’t want to engineer ‘virtual’ versions of what has already been committed to hardware. It wants new stuff, agile stuff. It wants a fresh start.

Four years ago the original NFV white paper introduced the revolutionary and liberating concept of Network Functions Virtualization (NFV). The idea was to emulate the clear advantages the IT industry was enjoying, having virtualized applications in the data centre and then having utilised open source software and commodity server hardware to build massive ‘web scale’ clouds which enabled them to run the vast applications (Google, Facebook, AWS) which today dominate the Internet.  

It was not just the scale economics which attracted envious looks from telcos, but the use of open source software to provide an agile, open environment where software development and operations (DevOps) worked hand-in-hand on a never-ending cycle of innovation and service improvement.

If telecoms could have this sort of capability, it was thought, it could see off the challenge from the dreaded ‘OTT’ players eating the telcos’ lunch.

Four years of frenzy

The last four years have spun the traditional telecom infrastructure providers into a technical frenzy as they tried to prove that they could make telecoms virtualisation work. Very often they took the code driving their ‘black boxes’ and made virtual network functions (VNFs) out of it to drive the ‘white box’ environment. After all, we were told, compatibility with the legacy environment would be important - telcos weren’t just going to throw everything out and start from scratch when they had already invested many billions.

But something has gone wrong on the journey. It turns out that onboarding, integrating and managing VNFs in practice is much harder and more time-consuming to engineer than at first thought. Rather than just load up with functions and go, telcos found that deployment took a long time and that they had to ‘dig in’ to the platform and twiddle, despite being told that a ‘proper’ cloud environment meant applications could be spun up and would ‘just work’.

It turns out that you can’t create an agile service and applications development environment by ‘virtualizing’ slabs of legacy code. That’s part of the problem, not part of the solution.  

So here’s the alternative

We think a new arrangement of words can be a good way of turning the page, and we may already be halfway to exiling ‘virtualisation’ anyway. We’ve noticed at industry forums that the ungainly SDN/NFV nomenclature is already being informally shrunk to SDNFV in speech. Given that the acronym is now in the washing basket, let’s pop it into the washing machine - on the hottest setting - and shrink the official written form down to Software Defined Network Functions (SDNF).

Doesn’t that sound better? So ‘virtualisation’, though still vital, becomes a technique, not a destination.

What changes?

Now that the underlying infrastructural arrangements have been agreed, the industry needs to develop a new set of tools to meet the network’s commercial objectives. The transition is from a technically driven, bottom-up evolution that deploys tools like virtualization and vSwitches to an economically driven, business-model-led evolution that will require a very different set of tools to make it work.

Tools such as redeveloped cloud-native applications (applications which are not derived from legacy systems but built from scratch for the cloud) and decompositions of those big software chunks into much smaller functions, making the applications independent and faster to deploy.
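
As a sketch of what one such decomposed function might look like (illustrative only, built on Python’s standard library; no real VNF code is implied): a tiny, stateless service that does exactly one job and can therefore be deployed, scaled and replaced independently of everything else.

```python
# A minimal sketch of one decomposed, cloud-native function: a tiny stateless
# HTTP service that does exactly one job and can be deployed, scaled and
# replaced independently of everything else. Illustrative only - plain
# Python standard library, not code from any real VNF.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TinyFunction(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # A health probe lets an orchestrator kill and replace instances
            # freely; holding no state is what makes that safe.
            payload = {"status": "ok"}
        else:
            payload = {"echo": self.path}  # the one job this function does
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each instance is identical and disposable; scaling out is just
    # starting more of them behind a load balancer.
    HTTPServer(("0.0.0.0", 8080), TinyFunction).serve_forever()
```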

Tapping the real benefits of open source

For the same reason it’s dangerous to think, as some do, that we need a ‘special telco cloud’ tuned for exceptional telco requirements.

This is often advanced as a cure for the telco-grade deficit that some believe is lurking in the cloud. In fact, as far as resilience and availability are concerned, the cloud provides an excellent backstop, especially with disaggregated functions. Instead of trying to add extra 9s to the 99.9x systems reliability measure, the better course is to have redundant functions available on standby, ready to spin up somewhere in the cloud and shoulder an extra load if a system should fail.
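
A toy illustration of that standby pattern (endpoints and timings are hypothetical; in a real deployment an orchestrator such as Kubernetes would own this loop): probe the active instance and route to a standby the moment it stops answering.

```python
# Toy sketch of 'redundant functions on standby' instead of chasing extra 9s:
# probe the active instance and route to a standby the moment it stops
# answering. Endpoints and timings are hypothetical assumptions.
import time
import urllib.request

INSTANCES = [
    "http://primary.example:8080/healthz",   # hypothetical endpoints
    "http://standby.example:8080/healthz",
]

def healthy(url: str, timeout: float = 1.0) -> bool:
    """True if the instance answers its health probe in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors and timeouts
        return False

def active_instance() -> str | None:
    """First healthy instance wins; callers route traffic to it."""
    for url in INSTANCES:
        if healthy(url):
            return url
    return None

while True:
    target = active_instance()
    print("routing to:", target or "no healthy instance")
    time.sleep(5)  # failover happens on the next probe cycle
```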

And let’s not forget that circumstances always change. At the network’s edge, the error-free operation of millions of IoT devices may not always be that all-fired important. If there’s an error, there’s always retransmit.

So we don’t need a special telco cloud, just ‘the’ cloud - a collection of software, crowd-sourced for re-use by telecoms, IT, industry verticals, government and so on, especially as network and cloud intimately engage to provide next-generation services. Speed to market and agility are everything these days, and we need to move forward with the IT industry rather than be semi-detached from it.

Conclusion

The ‘Cloud native’ approach - where functions are coded from scratch rather than ‘virtualized’ from existing code - is the preferable way forward.

And perhaps as important, we should accept the fact that there are two separate industries operating here - platform and infrastructure being one, and software and applications the other.

To keep them honest and separate, there’s a need for independent actors - perhaps with new or non-standard business models - to help choose and integrate things in a standard way without the risk of being accused of attempted ‘lock-in’.

This blog is the fruit of a discussion between Ian Scales, Managing Editor, TelecomTV and Tord Nilsson, Director of Global Marketing at Dell EMC.
