3Com Responds to ISP Complaint

Letter from 3Com Total Control Product Manager In Response to some ISP complaints & suggestions

The following letter was sent to 3Com by a group of its ISP Customers -
"The Top-10 Gripe List":         

The Total Control Top Ten Unresolved Issues

compiled by Allen Marsalis, ShreveNet, Inc.
I have compiled this "Unresolved Issues List" from dozens of posts to the
usr-tc mailing list and private emails that I have received, with the sole
purpose of presenting this information to USR/3COM management in an effort
to improve communications between USR/3COM and its high-end customers. It
should also be noted that many USR/3COM customers (myself included) are
very satisfied in many respects with their TCS purchases. Outlining and
organizing current problem issues in no way implies that anyone is fully
discrediting 3COM or its products. Only that many of 3COM's ISP customers
are stuck on various short and long term problems whose solution would mean
much greater overall customer satisfaction and perhaps greater sales.
But are we "politically" motivated to solve our problems, enhance our
services, and increase the functionality of our equipment? You bet!
And I believe our efforts to resolve issues are mutually beneficial across
the board. This is a very informal report, so I took the liberty of adding
some comments and suggestions that I felt might be informative or helpful.
I have collected and stated various problem descriptions, terms,
effects, and perhaps more importantly, tried to focus on how particular
problems affect Total Control owners/admins and their overall attitude
toward these issues, the Total Control System, and its producers.
We are talking about long standing issues like Quake Lag and OSPF support.
Many feel it's important to communicate the endurance of many of
these issues and how many ISP's are affected and frustrated by the
apparent lack of attention given toward real solutions. We are willing
to help present current problems in a proper and positive manner being
as informative and sincere as possible. In return we hope that 3COM
management will take notice and consider taking definitive steps toward
resolving some of these issues to everyone's mutual benefit.
Now that I have laid the ground work I would also like to disclaim
myself from the material that I have compiled from lists/email.
Although I have experienced some of these problems in my own network,
I most certainly have not experienced each and every one! In other
words, please don't "shoot the messenger."         

Issue#1 UDP Packet Latency and Loss (aka Quake Lag)

Quake players dialing into access networks hosting TC netservers have
long experienced packet latency and loss resulting in unfavorable
to impossible game play. These same players with their same equipment
and settings can dial into non-TCS networks and receive substantially
lower "ping times" and better overall play. In effect, anyone playing
Quake through a Netserver is handicapped if not out of the game.
This issue was reported 7 months ago or longer. And for most of those
months, it's been generally known that the culprit is the Netserver.
The HiperARC does not exhibit the problem at all, or not to nearly the
degree of the Netserver. At the other end of the spectrum, a 48 port
Netserver bundle (1706) with the "double-up" kit installed yields
ping times off the scale. (and therefore very unfavorable game play)
Please note that an ISP does not have to host a quake server for this
to become a big problem. Any ISP is likely to have some customers
who play quake on servers out over the net. When router hop latency
is added to the "quake lag", it is impossible to play under some
Total Control configurations.
Proposed Fix:
Purchase HiperARC's (and break MPIP). No upgrade available.
No apparent fix for the netserver.
USR Engineers and ISP's have witnessed as much as 60% packet loss in UDP
communication between a client and a TC Hub. A simple client/server was
written, the server running on the ISP LAN, and the client running on the
users dialup connection. UDP packets are sent out, and then totalled on
the server. Over the last six months, the problem has been fully examined
and researched without resolution.
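The client/server loss test described above is simple to reproduce. Here is a rough sketch in Python of the same idea, run against localhost for illustration (the addresses, packet count, and pacing are hypothetical stand-ins; the real test ran the server on the ISP LAN and the client over the dial-up link):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999   # hypothetical test endpoint
NUM_PACKETS = 200

received = set()

def server():
    """Count UDP packets arriving on the server side (the 'ISP LAN' end)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(2.0)   # stop once packets quit arriving
    try:
        while True:
            data, _ = sock.recvfrom(64)
            received.add(int(data.decode()))  # each packet carries a sequence number
    except socket.timeout:
        pass
    finally:
        sock.close()

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)  # let the server bind before the client starts sending

# Client side (the 'dial-up user' end): send numbered UDP packets.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(NUM_PACKETS):
    client.sendto(str(seq).encode(), (HOST, PORT))
    time.sleep(0.005)  # pace packets roughly like a game client would
client.close()

t.join()
loss_pct = 100.0 * (NUM_PACKETS - len(received)) / NUM_PACKETS
print(f"sent {NUM_PACKETS}, received {len(received)}, loss {loss_pct:.1f}%")
```

Over a loopback interface loss will be near zero; the point of the original test was that the same totals, taken across a Total Control hub, showed up to 60% of the packets missing.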
This is the only problem in the list that I will comment on personally.
(due to my considerable experience and expense at solving the problem)
Our local competitors use 486 Linux/Cyclades boxes with (ironically)
Sportsters and don't have the problem. USR management led me to
believe that there would be an upgrade path at reasonable expense for
us Quake laggers. But the upgrade never came about, and we were losing
customers and were forced to upgrade to HiperARC's to rid ourselves of Quake lag.
ShreveNet, Inc. has spent over $12K to fix "Quake lag" which is a problem
that should have never existed. In fact, due to extreme price fluctuations,
we were actually penalized for solving our problem early on compared to others.         

Issue#2 Missing OSPF support

OSPF is an interior routing protocol that is currently implemented on
many products by Cisco, Livingston, and others, but is missing from
the current TC Netserver and HiperARC. This feature has been long
promised by USR, and is a glaring hole in the TCS feature set.
OSPF is superior in many ways to its predecessor, RIP, currently
available for the Netserver and HiperARC. The "link state" algorithms
used in OSPF to propagate routing information are vastly superior
to "distance vector" algorithms like the one in RIP.
OSPF would allow better control and scalability of static IP address
assignment for our customers, no matter where they dial in. In short,
extreme delays exist between TCS software revisions when compared to
others such as Cisco, and features such as OSPF never make it in.
There are many reasons why OSPF is needed and why RIP is an inferior
protocol, such as:
1. The "count to infinity" problem: RIP, like most routing protocols, uses
a metric so that a router can give a particular route a preference. This
hop-count metric is incremented once per hop. The problem is that with
RIP, 16 is "infinity": a metric of 16 means the route is unreachable, so
no usable route can span more than 15 hops. The mechanism is analogous to
the IP TTL field: each time a packet travels through a router, its TTL is
decreased by 1, and when the TTL reaches 0 the packet is discarded and no
longer exists on the network. The idea behind this is to stop packets that
are caught in routing loops; instead of going back and forth and never
stopping, the TTL will be decreased to 0 and the packet will die.
We need the metric ceiling to be large enough for a route to traverse a
network of whatever size we wish to implement. For most ISP's the "count
to infinity" problem isn't that much of a problem. But for larger ISP's
this can be an issue. Other routing protocols get around this limit and
don't have the "count to infinity" problem. Of the big 4 interior
routing protocols today, RIP is the one most people would prefer least.
2. Route convergence. On larger networks, convergence time is
unacceptably slow (above 5 minutes) for most applications.
3. Routing Loops. Split Horizon with Poison Reverse Update algorithm
only counters routing loops between adjacent routers. Other routing
protocols use more sophisticated mechanisms to counter larger routing
loops, allowing the ISP to use a zero hold-down value, which speeds up
convergence time.
4. RIP updates use more bandwidth than other routing protocols. Once you
grow to a considerable size this becomes unacceptable. RIP sends the
entire routing table in an update, unlike other routing protocols.
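To make the metric ceiling concrete, here is a toy distance-vector update in Python. This is only a sketch of the hop-count arithmetic (real RIP speaks UDP port 520 with timers, split horizon, and triggered updates, none of which is modeled here; the router names and prefix are made up):

```python
RIP_INFINITY = 16  # RIP treats a metric of 16 as "unreachable"

def merge_advertisement(table, neighbor, advertised):
    """Merge a neighbor's advertised routes into our routing table.

    `table` maps destination -> (metric, next_hop); `advertised` maps
    destination -> metric as the neighbor sees it.
    """
    for dest, metric in advertised.items():
        new_metric = min(metric + 1, RIP_INFINITY)  # one more hop via neighbor
        current = table.get(dest, (RIP_INFINITY, None))[0]
        # Adopt the route if it is strictly better, or if it came from the
        # next hop we already use (whose metric may have worsened).
        if new_metric < current or table.get(dest, (None, None))[1] == neighbor:
            table[dest] = (new_metric, neighbor)
    return table

table = {}
merge_advertisement(table, "routerB", {"10.1.0.0/16": 3})
merge_advertisement(table, "routerC", {"10.1.0.0/16": 14})
print(table["10.1.0.0/16"])   # via routerB at metric 4; routerC's path is worse
merge_advertisement(table, "routerB", {"10.1.0.0/16": 15})
print(table["10.1.0.0/16"])   # metric capped at 16: the route is now "infinity"
```

The last update shows the ceiling: once a path's metric counts up to 16, RIP declares the destination unreachable, which is exactly why RIP cannot span networks wider than 15 hops.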
Proposed Fix:
Netserver code upgrade promised well over a year ago.
No word (or hope) at all on HiperARC.
The presence of this feature would be more on par with what's out
there in the marketplace.         

Issue#3 Unresolved/unresolvable issues with Tech Support

Whenever a user is under a contract for technical support, they feel
that they should be rewarded for discovering a new problem with more
than just a ticket number. Certainly a follow-up call back would be
appreciated, even if the issue is unsolvable. When someone waits
through the hold queue and sheds light on a new problem never heard
about, as well as paying for the privilege, they deserve a
quick and fair answer. It sometimes appears that your most expensive
equipment is supported the least with issues dragging out for months
without word of resolution. Just about everyone agrees that Cisco's
Tech Support should be the model to go by. And Cisco is therefore
rewarded with great market share. In short the problem is preference
towards press releases rather than customer support.
Many times 3com tech support is not familiar with the "open issues"
for the product, which are publicly available on the Totalservice
website. They act as if the problem reported (i.e. quake lag) is
unknown to 3com.
At times, level 1 tech support does not seem familiar with the basic
troubleshooting procedure outlined in the very manuals printed by 3com.
ISP: "The 'Modem Unavailable Counter' on my HiPerDSP modem keeps
incrementing, and my users are getting busy signals!"
Tech: "What do you mean 'Modem Unavailable Counter'?"
(the modem unavailable counter is one of the first things you look
at when troubleshooting problems, and this is outlined in the
troubleshooting portion of the manual)
ISP: "When you do a 'sh spnstats' it shows the counter."
Tech: "Show what? where is this command? Let me telnet to your HDM"
ISP: "You can't telnet to HDM's they don't even have IP addresses...
Do you know what you're doing?.."
Many ISP's are reluctant to even let a level 1 tech into their systems
even when they are having a problem. Of course, woeful lack of knowledge
such as this is not always the case, but it does happen way too often and
most everyone who owns TC hardware has a funny (or not so funny) tech
support story to tell. And stories get posted.. Word gets around.. Our
point is that this is not conducive to good business growth, is bad
marketing, and is bad for your corporation. Many ISP's live and die by
the fact that "word of mouth" is the most potent (and cost effective) of
all forms of advertising.         

Issue#4 MPIP on the HiperARC

MPIP is a "must have" service when providing ISDN dialup service
for everyone but the smallest shops with only one Netserver or
HiperARC. There is an overall lack of interoperability with ISDN,
and lack of testing for it to be 'solid'. Universal ISDN connect
is much slower than competing vendors' NAS equipment for no apparent reason.
MPIP is in no sense reliable at this point on the Netserver and is
nonexistent for the ARC. Not much else to say..
Proposed Fix:
Netserver: (hope it gets better)
HiperARC: Promised mid to late March with the next code release.
Many ISP's are waiting on MPIP to purchase the latest HiperARC.

Issue#5 Concurrency Problem in Security and Accounting Software

A major problem experienced with the TC equipment is concurrency
control under their RADIUS software implemented in the Security and
Accounting software. When login tracking is enabled, there is the
option of selecting how many concurrent sessions a user can have.
The NAS can be the Netserver or the HiPer ARC. However, the problem is
the Security and Accounting software not getting or not recording
disconnects properly. It is an intermittent problem. What happens
is that the software will record someone connecting but when the
disconnect is missed, the software will still show a connection yet
a check of the NAS shows the caller hung up. Thus the caller will
call back in and won't get authenticated because the software says
they have reached their limit on connections. This problem was in
the 4.3 release and was supposedly fixed in the 5.0 release, but it
is still broken. The solution is to turn off login tracking, which
in turn disables concurrency checking and certain accounting reports.
Proposed Fix:
Shut off login tracking.         
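Short of disabling login tracking, the usual workaround is a periodic script that reconciles the accounting database's "active" sessions against what the NAS actually reports and clears the stale entries. A minimal sketch of that reconciliation, with hypothetical names (the real data would come from the Security and Accounting database on one side and a poll of the hub on the other):

```python
def stale_sessions(accounting_active, nas_active):
    """Return usernames the accounting software thinks are online
    but the NAS says are not.

    `accounting_active`: set of usernames the Security and Accounting
    software shows as connected.
    `nas_active`: set of usernames actually connected per the NAS.
    A session present only in the accounting set lost its disconnect
    record and is blocking that user's next login.
    """
    return accounting_active - nas_active

acct = {"alice", "bob", "carol"}   # what the accounting DB shows
nas = {"alice", "carol"}           # who the hub actually reports
to_clear = stale_sessions(acct, nas)
print(to_clear)  # bob hung up, but the disconnect was never recorded
```

Each name returned is a concurrency slot to release so the user can authenticate on their next call.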

Issue#6 HiperARC Connect Failures and random reboots

Many HiperARC users are reporting random reboots and random connect
failures to varying degrees. Some have also seen spontaneous reboots
of HiperDSP cards. ER code has relieved much of the reboot
problem, but spontaneous reboots and "Connect Attempt Failures" are
still commonly being reported. Some admins have reported connect
failures at a rate greater than 50% on specific channels only, no
matter how many HiperDSP cards are in the chassis. ISP's are running
the original release of HiperDSP code and are waiting for something
newer and more stable, with working v.90 that doesn't break our
existing customer base's 56k connections.         

Issue#7 Telephone Support and Hold Queue Music

Perhaps no other problem has caused as much of a stir as the
phone support hold queue times and the lack of variety of music
to wait by. Although times are reported to have improved, it
was not long ago that one would wait hours on hold. We hope
queue times continue to improve. However, the music has not!
At a glance this doesn't really seem to be an issue unless you
have had the fortune to sit on hold with USR for hours.
And there is no easy way for *experienced* users to get past
frontline tech support. The Web ticketing system is not encouraged,
nor used properly. In short, the Internet is still treated
as a "Deluxe BBS" rather than a useful support system. (See Cisco
again) Rogue techs such as Tatai Krishnan and Mike Wronski seem to
realize its usefulness and are to be commended for their efforts,
but the company as a whole does not.
Tech support people should be sharp, and support contracts
need not be outrageously priced if responsiveness to tech calls is
not guaranteed.         

Issue#8 Software Quality Control and Support Policy

Access to supporting software requires a support contract, even
for firmware upgrades. And often times, bugs are found in release
code (and sometimes continue through several releases) that really
should have been found and are easily corrected. The overall
feeling sometimes acquired is that paying to be a beta test site
and constantly running ER code to fix major problems is wrong.
Lack of documentation and useful TCM help on new code releases
enhances the feeling. Livingston, Ascend, Farallon, Cabletron,
and sometimes Cisco ALL have free firmware updates.
Notes like "HARM is available on Macintosh and UNIX" (which it isn't)
only contribute to customer frustration. There seems to be a big gap
between technical writing and software writing. Online help screens are
unhelpful, if they even have the correct information to begin with.
Online help on Netservers often doesn't match the actual syntax of the commands.
There is a woeful lack of information about new releases of code
either sent to customers or available on the website. Release
Notes are good, but even those aren't always available (li030724.nac,
for example). It would be nice to have a revised full manual
available on some basis, even if only in .pdf format.
USR should ship all the software required to administer a chassis
and not force you to download programs that may be hard to get to
because their support contracts have not gotten into their system
quickly enough. Or all the website passwords, x2 enable key passwords,
contract numbers, etc. get bungled or delayed. (if LaChina is not
in, you are out of luck)
In short, customers want quality assurance especially when they have
a support contract. And they want to use released software, not beta
or ER code. Released code should have some semblance of reliability
(MPIP, ISDN, Binary mode telnet, etc.) The existing beta test system
must be utterly lacking, since code is almost always released with
blatant bugs and interoperability problems.
- MPIP, ISDN, Binary mode telnet need work.
- NMC classless features need improvement and show no signs of being
worked on.
- PortMux doesn't work correctly for character based telnet sessions.
Keeps putting "@" in front of lines and adding spurious line feeds.
- TCM can be flaky at times when highlighting a lot of modems,
resetting, and saving defaults to NVRAM. Highlighting and acting on
individual cards is slower but much more reliable.
- All cards have logical/port level auto response capabilities except
the PRI card. Thus events like downed T1's cannot be acted on
automatically regardless of the problem.
Proposed Fix:
Pay to play. Rely on a mail list instead of extensive documentation.
Spend more money on engineering and a little less on marketing and
you will be pleased by the results. One sees fewer Livingston ads
overall than USR/3COM, yet they are USR's biggest competition in NAS.
Please "enable" your engineering staff, whatever that may require..         

Issue#9 Software robustness and multi-platform support

Lack of RADIUS robustness and standards compliance is a constant
nuisance. Many TC admins actually use Livingston RADIUS - it's free,
and if you own a Portmaster you can get the source code and compile
or change it yourself so it always works. TC hubs don't work with
all of the extensions, etc. Some have to run scripts to make sure dial-up
users get dumped after idle time or a hard time limit on the hubs,
whereas RADIUS handles it on the Portmasters. Your NT product should
support some relational database like MS SQL or Sybase. Using MS Access
is not scalable and doesn't easily allow primary and backup servers
to share the same database. Many have looked at the TC product as
a remote access solution but without RDBMS support, it wasn't scalable
and couldn't be integrated into other security and management systems.
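The session-timeout scripts mentioned above might look something like this. It is only a sketch of the selection logic; how you poll the hub for session data and issue the disconnect (telnet, SNMP, etc.) is left out, and all names and limits are hypothetical:

```python
import time

IDLE_LIMIT = 20 * 60       # seconds of idle time before dropping a session
HARD_LIMIT = 12 * 60 * 60  # absolute cap on total session length

def sessions_to_drop(sessions, now):
    """Pick the sessions exceeding the idle or hard time limit.

    `sessions` is a list of dicts with 'user', 'login_time', and
    'last_activity' timestamps, as polled from the hub.
    """
    drop = []
    for s in sessions:
        idle = now - s["last_activity"]
        total = now - s["login_time"]
        if idle > IDLE_LIMIT or total > HARD_LIMIT:
            drop.append(s["user"])
    return drop

now = time.time()
sessions = [
    {"user": "alice", "login_time": now - 3600, "last_activity": now - 30},
    {"user": "bob",   "login_time": now - 3600, "last_activity": now - 1800},
]
print(sessions_to_drop(sessions, now))  # bob has been idle for 30 minutes
```

On a Portmaster the RADIUS attributes handle this for you; on the TC hubs a cron job running logic like the above is the fallback.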
Other issues here include "packet filters" which need improvement, and
half/full duplex issues with the ethernet ports on the Netserver and
HiperARC. The aforementioned issues of OSPF and MPIP support also
deserve a #1 place here.
Many ISP's do not like NT and wish to have another choice of OS, such
as Linux or BSD, when administering TC hubs.
OK, I have one more personal observation/remark. We purchased a Sparc
workstation just to use Unix TCM. However, it came with Solaris
2.6, and we would have to downgrade to 2.5 in order to use TCM. Windows and
Solaris 2.5 all by themselves are not good enough choices for any code base.
A Windows NT bias is not always good for ISP's. Please consider a
Linux, FreeBSD, BSDI, or a Solaris 2.6 port of Unix TCM.         

Issue#10 All of the Above

Individual instances of problems such as the ones touched on here are
to be expected with any software/hardware system as complex as the TCS.
However, "the whole is more than the sum of the parts" is applicable
here and paints a very disturbing picture for those of us who make
a living hand in hand with the Total Control System. We would just like
to feel more assured that problems are getting the attention that they
deserve. And overall, feel better about our choice to use TCS to host
our services. In return, happy customers will "sing your praises"
which is the best and cheapest form of advertising.         


To a degree, many TCS customers, especially ISP's, have an overall lack
of faith in their chosen NAS vendor. These issues, which many have lived
with for months or years, have made many of them leery of moving toward more
TC equipment.
Many agree that USR/3com needs to focus more and offer ISPs better service
as well as stronger/better software support. Consumer demand worked for
getting USR (and x2) on the racks of many ISPs. With v.90, that demand
will no longer carry the weight that it once did. Promotion and pricing
are not the only marketing considerations. Product and service come first
for long haul growth and stability.
The manner in which 3Com responds to ISP demands over the next 12 months
will determine their reputation in the ISP marketplace. If 3COM continues
with their current trends, their favor with ISPs will decline as more
complete and stable alternatives become available.
We hope that this summary has given you an overall idea of the state
we are in at the moment and the desire to improve overall TCS customer
satisfaction. I am sure that many who have contributed to this report
would be happy to help out in any way they can to resolve even some of
these issues. Please let us know if there is anything else we can do
to help facilitate a timely solution to any of the above problems.
With sincerest regards and best hopes, we are,


The following 3Com response was posted to the usr-tc discussion group on 4/28/98.         



April 17, 1998

 Dear Total Control Aficionados:

 Thank you for your thoughtful letter. It is always a pleasure to receive constructive feedback from customers. We have been carefully considering your concerns and have prepared this response. To that end, we have categorized your collective concerns into 3 groups: 1) System stability and performance, 2) Feature robustness, and 3) Customer service. We will attempt to address your concerns in terms of these categories.

 System Stability and Performance

This category is broad, and covers both Netserver and HiPer ARC based Total Control systems. With respect to "Quake Lag" specifically, we recognize the issue that many service providers face. Your concerns have not fallen on deaf ears, in fact we have improved performance, including latency, on Netserver based systems by more than 30% over the last year. At this point, we have run into hard architectural barriers in the Netserver software and hardware which limit our ability to markedly improve throughput and latency further. Given this situation, we will roll out within a few weeks a Netserver to HiPer ARC trade up program which will offer aggressive credits towards the purchase of HiPer ARC's. To date, the promotional programs have been tied to the purchase of HiPer DSP cards. This program will be much more HiPer ARC centric. As most of you know by now, the performance offered by the HiPer ARC platform is significantly better than the Netserver. More details of this trade in program will be available shortly.

 Regarding the stability of the HiPer system, we have "turned the corner" so to speak. It is no secret that the first release of the HiPer Access system modules for Total Control (TCS 3.0) experienced a number of stability issues once deployed in "live" networks. The fact is that the number of bugs identified after the release of TCS 3.0 last Fall was in line with our expectations. However, the number of high severity bugs was much higher than anticipated. This situation was unfortunate, and it has prompted us to evaluate how we handle new product introductions. As a result, we are making procedural and operational adjustments to how we perform integration and testing of new software including our beta testing process. Some of these changes have already been implemented for the TCS 3.1 maintenance release.

 The initial HiPer system release (TCS 3.0) represented a tremendous engineering accomplishment when considering that the two new components for Total Control, HiPer ARC and HiPer DSP, were essentially completely new platforms. The HiPer ARC included a new micro processor architecture which brought us from an Intel 486 platform to an advanced IBM Power PC RISC platform. As you know, this also included a new software architecture which was necessary in order to support a significant density increase, and new features. With respect to HiPer DSP, we completely re-designed the modem supervisor architecture in order to allow us to accommodate multiple modems per DSP system. This was paramount for increasing density while also providing the long term architecture for multi-media applications like VoIP. In addition, we had to completely re-write the DSP and modem modulation code as well. This has allowed us to position the DSP card to be much more than a modem card.

 Given the significance of this undertaking, we expected a rather large number of bugs to be discovered during the System Testing and Beta Testing processes. In actuality, we discovered a very manageable quantity of bugs in HiPer ARC and HiPer DSP during this process. At the time of product release, there were no known "system crash" bugs.

 In addition to the bugs discovered during the system test and beta test cycles, we expected to receive a large number of bugs reported after release. In anticipation of this, we began planning the TCS 3.1 maintenance release. However, what we did not anticipate was the large number of system stability related bugs that were reported after release. This was particularly surprising given that the first two months after release were relatively quiet. In fact, we shipped over 500 HiPer ARC's and well over 5000 HiPer DSP's in that period of time. As it turns out, our customers spent the first couple of months "certifying" HiPer in their lab environments with limited production exposure. Typically, these are the same environments in which our customers perform beta testing. It was not until customers started deploying HiPer in "live" networks that we started to see a rush of additional bugs. In fact, there was a significant increase in bugs reported around the last week in January.

 As soon as we identified the sudden rush of reported bugs, we immediately reassigned engineers that were working on future software releases, and directed them to resolve bugs. Our priority was to establish stability in the HiPer components. This work continued at a fever pitch for about 6 weeks. This included sending key engineers in the field to a few customer locations in order to understand what it was about these "live" networks that caused the platform to behave differently than in the beta networks.

 After careful analysis, we have determined that most of the stability issues related to HiPer ARC, and they resulted, generally speaking, from 3 root causes: 1) RADIUS accounting compatibility, 2) Memory fragmentation after several weeks of operation, 3) Undocumented compiler bugs. The nature of these problems was such that they did not manifest themselves during our product quality and stress testing, nor during our beta test process. Once we corrected these three main issues, stability increased significantly. While we certainly cannot attribute all of the HiPer bugs to the above three conditions, we feel that the major issues are well under control.

 At this point in time, we are confident that we have addressed the HiPer system stability issues that plagued the first release. HiPer ARC and HiPer DSP for TCS 3.1 have been available in beta for approximately one month now. More importantly, the bug fixes that are included in this release have been in the field, in live networks, in the form of engineering releases for even longer. This has allowed us to verify these fixes in real world network environments. In fact, this experience has prompted us to implement a new customer service process whereby we make readily available software patches called "service releases". Service releases include bug resolutions which have had reasonable exposure and run time in at least three customer production networks. This allows us to provide problem resolution to a broad customer base, while reducing significantly the bureaucracy previously associated with processing emergency releases for our customers. If you are still experiencing HiPer stability issues, please ensure that you are using the latest engineering releases of software. The Service Release for HiPer ARC can be found at

 Considering the three main issues described above, the RADIUS related issues could have been, and should have been identified during our testing processes. The memory fragmentation issue would have been extremely difficult to identify during our testing process due to the call volume and duration of time that elapsed before the problem arose. However, there were some architectural design issues that, in retrospect, should have been avoided in the first place. Finally, the compiler issue was a complete surprise. It has been present, but not documented, in the compiler tools for quite some time. We attribute that to bad luck.

 All in all, the TCS 3.0, HiPer experience has caused us to re-evaluate our system development and release process. Some of the key observations we have made include:

  1. We need to better simulate live network environments during the system testing and beta testing process.
  2. We need better, more comprehensive, beta feedback. We feel that beta test guidelines will help facilitate this.
  3. We need to identify bugs sooner in the development process than we have (pre-beta). This will improve the overall efficiency of the testing process, and allow our System Test Group to focus more closely on functionality, resiliency and performance, rather than debugging.

 Understanding this, we have taken several steps in order to improve quality in the system development process. These include:

  1. Re-consider how we manage the beta process, and the test guidelines that we provide to our beta test partners. We will work with them to better simulate their "live" network conditions.
  2. Modify our beta test process to include "early beta" for a sub-set of our beta partners. This would be customers that consistently produce the most comprehensive beta feedback.
  3. We are also planning an "open beta" model whereby once we achieve a certain test milestone, say no known stability issues, we open the beta to all customers. This will provide a broader range of customer feedback.
  4. We have introduced a new engineering team called the Integration and Test Group (ITG). This team is tasked with testing new software builds prior to STG. The goal is to identify and resolve bugs sooner and more efficiently than we do today. This will allow the STG team to focus on functionality and performance testing rather than debugging.
  5. R&D created a new team called the System Engineering group which is chartered with planning the implementation of features across all of the Total Control system modules. This provides R&D with a system design view rather than a component (i.e., HiPer DSP) view. This should help avoid internal compatibility issues that arise during the new feature release process.
  6. Finally, we are evaluating our source code management policies in order to ensure that we do not introduce an unacceptable number of bugs when we integrate various components of the software.

 While it will take time to implement all of these changes, we have already taken some significant steps with the TCS 3.1 release. For instance, the ITG group was formed, and we pursued the idea of "early beta" with HiPer ARC and HiPer DSP. In addition, we would welcome your comments and thoughts here.

 Despite these early problems with the HiPer system, we are very pleased with the performance levels we have achieved. You may recall the Data Communications remote access concentrator review published last December. It announced that Total Control HiPer Access received the Tester's Choice Award. This was the first high density access concentrator review. Within one Total Control chassis we outperformed Cisco by 80% and Ascend by 50%.

 In addition to the Data Communications review, we have performed extensive testing within our labs. This is part of every system release. During our performance testing, we use a set of standardized test files which range in content from non-compressible to highly compressible data. We then make hundreds of calls and perform FTP downloads of these various files. We then measure the per connection throughput for each file type. The results for the TCS 3.0 HiPer release are shown below. We show a test result for an all digital test and an all analog test. These tests used a Total Control chassis configured with 14 HiPer DSP's (T1/PRI) and 2 HiPer ARC's.



As you can see, we have very linear performance even under extreme loads (all connections downloading data). Note that file type 4x04.tst consists of non-compressible data, so throughput for this file type is most indicative of "wire-speed" performance. The above charts show that Total Control HiPer performs at wire speed for both analog (about 4 KBytes/sec) and digital (about 8 KBytes/sec) calls. Similarly, the performance for highly compressible files (file 1x30.tst) was linear, with about a 9% throughput difference between analog call #1 and analog call #322, and a 13% throughput difference between digital call #1 and digital call #322. The most important point is that we can scale the HiPer system such that we are positioned to incorporate additional functionality, such as encryption and multimedia, without forklift upgrades.
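The exact contents of 3Com's test files were not published; as a rough illustration of the methodology, files at the two extremes of compressibility can be constructed synthetically and checked with a standard compressor. The sizes and the mapping to the named test files below are assumptions, not 3Com's actual data:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (lower = more compressible)."""
    return len(zlib.compress(data)) / len(data)

# Hypothetical stand-ins for the test files named in the letter:
# a highly compressible file (repeating bytes, like 1x30.tst) and a
# non-compressible file (random bytes, like 4x04.tst), each 64 KB.
compressible = b"\x00" * 65536
incompressible = os.urandom(65536)

print(f"compressible ratio:   {compression_ratio(compressible):.4f}")
print(f"incompressible ratio: {compression_ratio(incompressible):.4f}")
```

Because modem protocols like V.42bis compress data on the wire, only the non-compressible file measures raw link speed; the compressible file instead measures how well the concentrator sustains compression gains across many simultaneous calls.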


The following test shows round-trip latency measured using PING under varying call load. The test begins on an idle system (no file downloading), and load is then increased over time to fully loaded conditions. Ping measurements are taken as active connections are added.

 These test results show that we have very low through-box latency even under load. This is important for gaming services like Quake and for emerging multimedia applications like VoIP.
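A measurement of this kind can be scripted with the standard ping utility. The sketch below is a generic illustration, not 3Com's test harness; the host name, packet count, and the (Linux-style) ping output format are assumptions:

```python
import re
import subprocess

# Matches per-packet lines such as: "64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=12.4 ms"
TIME_RE = re.compile(r"time[=<]([\d.]+)\s*ms")

def parse_ping_times(output: str) -> list:
    """Extract per-packet round-trip times (in ms) from ping output."""
    return [float(m.group(1)) for m in TIME_RE.finditer(output)]

def average_latency(host: str, count: int = 10) -> float:
    """Average round-trip time to host, in milliseconds."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    )
    times = parse_ping_times(result.stdout)
    return sum(times) / len(times)
```

Repeating `average_latency()` through a dialed-in connection while ramping up the number of active calls would reproduce the shape of the latency-under-load curve described above.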


 Feature Robustness

Specifically, the Top 10 list identifies the need for MPIP and OSPF in HiPer ARC. We realize that these features have been promised for some time in HiPer ARC v4.1 and v4.2, respectively. HiPer ARC v4.1 was originally expected in March/April of '98. It has been delayed due to the reallocation of engineering resources to the TCS 3.1 maintenance release; HiPer ARC stability was our first priority. The good news is that MPIP has been code-complete for a few months and has been undergoing testing at a few alpha sites. Currently the HiPer ARC v4.1 code is being integrated with the ARC v4.0 maintenance release, and system testing will begin within about one week. We have early MPIP performance numbers as follows:

  MPIP Configuration Option                    Active Calls     MPIP Server Performance
                                               (FTP download)   (Sustained Registrations per Second)
  -------------------------------------------  ---------------  ------------------------------------
  Dedicated HiPer ARC                                           930 per second
  (configured as an MPIP server only)
  Shared HiPer ARC                             8 T1's           688 per second
  (configured to process calls and act as
  an MPIP server)

In addition to MPIP, HiPer ARC v4.1 includes nearly all of the Netserver functionality (with a few exceptions like Frame Relay and periodic CHAP), and also adds many features that are not in the Netserver platform. Some of these include:


Regarding OSPF support, this is still planned for HiPer ARC v4.2 (Yes, I know, you have heard this before). At this point, OSPF is code complete and undergoing integration and testing. The following capabilities are planned for this first release. We would appreciate your input regarding this OSPF feature set:


Of course more OSPF capabilities are planned in the second release. We are anxious to obtain willing beta test sites which would help us validate our OSPF implementation in live networks. If you are interested, please let us know. In addition to OSPF, HiPer ARC v4.2 introduces many more advanced features like Frame Relay, IPsec and DVMRP.

 Regarding your concerns about 3Com RADIUS robustness, we clearly recognize the need for increased interoperability testing. Our system testing group uses Merit and 3Com RADIUS servers (the latter actually based on a Livingston source code license). Our beta process is being altered to ensure expanded RADIUS exposure in our customers' networks. As for the concurrency issue, this problem occurs in servers that use the flat-file format rather than one of our various RDBMS options; it is a result of file-locking issues with the flat file. Speaking of our RDBMS support: you are correct that we support only MS Access for NT at this time. We will seriously consider the MS SQL request. We do support Oracle for Solaris v2.6, and Postgres as a freeware RDBMS.
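The flat-file concurrency problem described above is generic to any server that appends records to a shared text file: without locking, two writers can interleave partial lines. The sketch below is not 3Com's implementation; it is a minimal Python illustration of the advisory locking (POSIX flock) that a flat-file store needs, with a hypothetical accounting-log file name:

```python
import fcntl
import tempfile

def append_record(path: str, line: str) -> None:
    """Append one record, holding an exclusive lock so concurrent
    writers cannot interleave partial lines."""
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

# Usage: two writers sharing one hypothetical accounting file.
with tempfile.TemporaryDirectory() as d:
    path = f"{d}/detail.log"
    append_record(path, "user1 session-start")
    append_record(path, "user2 session-start")
    print(open(path).read())
```

An RDBMS avoids the issue entirely because the database engine serializes concurrent writes itself, which is why the flat-file format is the one that exhibits the problem.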


Customer Service

Regarding your concerns about our customer service organization, we offer the following status and history. We hope you feel we are focused on the right areas and making progress in the right direction. The customer service organization has experienced significant change dating back to the second half of 1997. Following considerable turnover in the telephone technical staff in late 1997, service levels slipped well below objectives and expectations. As a result, there has been an all-out effort to improve support operations and customer satisfaction.

 The customer satisfaction results shown in the chart below are from surveys administered by an outside third-party firm. The surveyor calls customers on a daily basis to question them on the quality of 3Com technical support. These customers are selected at random from the closed cases for that day. This produces about 60 responses per week, or 250 per month. The chart clearly shows the impact of the changes within the customer service organization.


Since November, trends in customer satisfaction have been positive. Staffing levels have enabled predictable and acceptable queue hold times. The wait time on the 800 number to speak with an engineer is down drastically from the several minutes of hold time that occurred regularly back in November. Also, the music played while on hold was recently changed from classical to more contemporary sounds. We have received positive comments on the new music, and we hope you agree. :-)

 Most of the recently hired support engineers have now moved beyond their initial learning curve. New hires typically have at least a four-year technical degree and 3-7 years of related experience in the industry; all have had extensive product and process training, and most have completed several weeks of on-the-job call center training.

 Collectively, these actions have translated into the strong positive trend since Q4 '97. The initiatives outlined below should produce increasingly better results and sustain the positive trends in customer satisfaction. In addition, several initiatives are underway to improve the quality of both voice and electronic support. Some of these improvements are in place now; others will be implemented throughout the first and second half of 1998.

 Expanded Coverage

True 24x7 support coverage was made available for premium contract customers in February. These customers now receive live call support after hours and on weekends.

 The New Product Introduction and Beta Programs

To ensure customer service readiness within the various service delivery teams prior to general release, beta customers will be supported by these teams as well as by the beta support teams. This will help develop technical knowledge, validate the escalation and call-handling processes for new products, and shorten the service delivery teams' learning curve for new product enhancements.

 3Com MNS Credential

The Master of Network Science program was developed by 3Com to recognize the highest levels of technical proficiency in computer networking solutions, and establish a new standard for the industry.

Separate credentials will be available in five specific areas:

This recently announced program will be mandatory for the support staff. The support engineers are currently doing the beta testing for the MNS. They will hold credentials in each of the appropriate areas of expertise, and be required to keep credentials current. For more information visit


Call Flow, Escalation and Management Notification Process

These processes are currently under review. The goal is to

Some of the process work is targeted for implementation as early as May 1998.


More Customer Service Options

A knowledge-base search facility is being developed that will enable better solutions and encourage the use of web- and email-based tickets. The knowledge base will link directly to the case management system. As problems are resolved, the symptoms and solutions will become immediately available, worldwide, for symptom/resolution searches. Deployment of the knowledge base will begin in the May 1998 timeframe and will roll out over the next several months.

 While we understand that 3Com and the Total Control system are not perfect, we always strive for perfection. We certainly have room for improvement, and with the direction of our customers, we feel we are making progress. We are hopeful that we have successfully reaffirmed our commitment to the ISP market space. Thank you in advance for your continued support. I look forward to your feedback.


 Patrick W. Henkle

Total Control Product Line Manager

