• The Dark Overlord: A New Cyber Threat Puts Schools at Risk

        Nov 08 2017, 3:35 PM

        by khaley 1

        The education sector is learning the lessons of weak data security the hard way: Cyber thieves have attacked more than three dozen school districts this year, exploiting poorly-defended systems to steal valuable information or take over their networks and hold them for ransom.

        It’s a familiar problem. The education sector faces higher rates of phishing attacks, malware and spam than other sectors of the economy, according to Symantec’s 2017 Internet Security Threat Report - Government. No surprise, then, to learn that information security led the EDUCAUSE 2018 Top 10 IT Issues list for the third year in a row.

        But as malicious hackers continue to target school districts, the US Department of Education is now warning of an even more insidious form of cyber-extortion.

        New Challenge Looms

        Schools have previously been targeted by ransomware attacks, where malicious hackers encrypt an organization’s data and hold it hostage until they get paid. In this latest scheme, attackers flat out steal data and then try to sell it back to their victims. Unless the ransom gets paid, the attackers threaten to sell the purloined names, addresses, phone numbers and other student data.  

        As a way of applying added pressure on the schools, attackers also send email or text messages to parents and students raising the prospect of violence at their school. In one case, over 20 parents received these kinds of threatening messages. 

        One district was forced to shut down 30 schools for three days as a preventive measure. So far, law enforcement has not deemed any of these threats of violence to be credible. But the psychological damage is real, with attendance falling at the targeted schools. Meanwhile, news of these incidents has inspired copycats, who have made bogus threats to disrupt other schools.

        The criminal gang behind these attacks calls itself the Dark Overlord. The group has been described as foreign, but it most likely comprises multiple members, at least one of whom has an excellent command of English and hails from an English-speaking country. These attackers have previously targeted healthcare organizations, movie studios and manufacturers.

        Fighting the Dark Overlord

        What can you do to blunt the threat posed by the Dark Overlord? First, don’t pay the ransom. There is absolutely no guarantee that the fraudsters will release the data once you pay. You can’t trust criminals.

        Second, keep these attackers out of your school from the start. That means stepping up the work of securing your network and the data that resides on it. The Department of Education just issued some pretty good advice. They suggest:

        • Conducting security audits to identify weaknesses and update/patch vulnerable systems;

        • Ensuring proper audit logs are created and reviewed routinely for suspicious activity;

        • Training staff and students on data security best practices and phishing/social engineering awareness; and

        • Reviewing all sensitive data to verify that outside access is appropriately limited.
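        The second item above, routine log review, is a natural place to start automating. Below is a minimal sketch that flags bursts of failed logins in a generic auth log; the log format, field positions and threshold are assumptions you would adapt to your own systems.

```python
from collections import Counter

def flag_failed_logins(log_lines, threshold=5):
    """Count failed-login lines per source IP and return the IPs that
    reach the threshold. Assumes lines shaped like:
    'Oct 12 03:11:02 sshd: Failed password for admin from 203.0.113.7'"""
    failures = Counter()
    for line in log_lines:
        if "Failed password" in line and " from " in line:
            ip = line.rsplit(" from ", 1)[1].split()[0]
            failures[ip] += 1
    return [ip for ip, count in failures.items() if count >= threshold]

# Six failures from one address trips the (assumed) threshold of five.
sample = ["Oct 12 03:11:%02d sshd: Failed password for admin from 203.0.113.7" % s
          for s in range(6)]
print(flag_failed_logins(sample))  # ['203.0.113.7']
```

        Even a simple script like this, run daily against exported logs, surfaces suspicious activity that would otherwise go unread.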

        Also, the FBI has spotlighted the practice where some attackers use anonymous FTP servers - most likely set up earlier and then forgotten by IT organizations - to gain access to an organization’s network. Unless there is a legitimate need to keep those servers in your organization, disable them now.
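        If you are unsure whether such servers still exist on your network, a short script can check. This is a hedged sketch using Python’s standard ftplib; the host names are placeholders for your own asset inventory, and any real audit should be coordinated with your IT and security teams.

```python
from ftplib import FTP, all_errors

def allows_anonymous_ftp(host, port=21, timeout=5):
    """Return True if the host accepts an anonymous FTP login."""
    try:
        ftp = FTP()
        ftp.connect(host, port, timeout=timeout)
        ftp.login()  # ftplib logs in as user "anonymous" by default
        ftp.quit()
        return True
    except all_errors:
        return False

# Example (would probe the host over the network):
#   if allows_anonymous_ftp("ftp.example.org"):
#       print("WARNING: anonymous FTP enabled - disable it")
```

        Running this against your server inventory turns the FBI’s advice into a concrete, repeatable check.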

        You don’t need to be afraid of cyber attackers. They may be evil, but they are not evil genius. They simply take advantage of mistakes we make. But we can fix the errors. It just takes diligence to follow best practices, put good security practices and products in place, and to be prepared.  

        • Thought Leadership
        • Products
        • Private Certification Authority Service
      • Certificate Authority Authorization Checking: What is it, and why should you care?

        Aug 30 2017, 6:12 PM

        by Lee-Lin Thye 0

        Certificate Authority Authorization checking: what is it, and why should you care?

        The Public Key Infrastructure (PKI) ecosystem relies on root certificates issued by various certification authorities (CAs) like Symantec. Browsers use these roots to decide which websites can be trusted and which cannot.

        Up to now, any CA has been able to issue a TLS certificate for any domain. That’s how the system works, and it’s good in the sense that it gives website owners and operators the option to change CAs at their discretion. The downside is that certificate issuance can happen without the knowledge of website operators, either by mistake or intentionally by malicious actors.

        A number of technologies have been created in an attempt to highlight instances of “unknown” issuance, such as Certificate Transparency. These have been effective in making the internet a safer, more trustworthy place, but they are reactive measures – they only allow website operators to address the issue after it has happened.

        But is it possible to prevent certificates from being mistakenly or inappropriately issued? Yes. Enter: Certification Authority Authorization (CAA).

        CAA prevents unknown certificate issuance by:

        1. Allowing domain owners to specify which CAs are authorized to issue certificates for their domains; and

        2. Giving CAs the ability to check this authorization before issuing a certificate.

        In this article, we’ll explain how CAA works, and why making CAA checking mandatory is a good move for both customers and CAs.

        What is Certification Authority Authorization?

        A Certification Authority Authorization (CAA) record is a DNS Resource Record which allows a domain owner to specify which CAs are authorized to issue certificates for their domain(s) and, by implication, which aren’t.

        The idea is that a CA will check the CAA record(s) for a domain before issuing a certificate. If it finds that a domain has no CAA record, then it’s free to issue a certificate for it if all other authentication checks succeed. However, if it does encounter one or more CAA records, then the CA can only issue a certificate if it’s named in one of the records, indicating that it is authorized to issue a certificate for that domain. The whole process is designed to prevent CAs from fulfilling certificate requests made by unauthorized parties or bad actors.
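        The decision a CA makes can be sketched in a few lines. This is an illustrative simplification of the RFC 6844 logic, not any CA’s actual implementation: it assumes the CAA records have already been retrieved and parsed into (tag, value) pairs, and it ignores 'issuewild' handling and the climb up the DNS tree to parent domains.

```python
def may_issue(caa_records, ca_domain):
    """Simplified CAA check: may the CA identified by ca_domain issue a
    certificate, given the domain's parsed CAA (tag, value) records?"""
    issue_values = [value for tag, value in caa_records if tag == "issue"]
    if not issue_values:
        return True  # no 'issue' records: any CA may issue
    # An 'issue ";"' record forbids all issuance; otherwise the CA
    # must be named in at least one 'issue' record.
    return ca_domain in issue_values

records = [("issue", "symantec.com")]
print(may_issue(records, "symantec.com"))      # True: authorized
print(may_issue(records, "other-ca.example"))  # False: not named
print(may_issue([], "other-ca.example"))       # True: no CAA records
```

        The real check also climbs from sub-domain to parent domain until a CAA record set is found, but the core allow/deny decision looks like this.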

        Sounds great. Why isn’t everyone doing this?

        Symantec has been checking CAA records for years, but it’s not a common practice. There are two reasons why CAA checking isn’t widely practiced:

        1. Many domains don’t have a CAA Resource Record; and

        2. Checking CAA records is not mandatory.

        Because it may take some work to create a CAA record, it’s a matter of customers or website operators consciously opting in, not opting out. Moreover, many domain owners use a DNS hosting provider, and CAA is not yet supported in some DNS implementations.

        Even so, CAA records are expected to be used by most high-value domains. These enterprises keep CAA records for their domains because they limit inappropriate (or malicious) certificate requests and make it easier to enforce company policies, i.e. only using a particular set of CAs.

        The limitations of CAA checking

        Of course, CAA checking has its limitations.

        First, a newly created CAA record does not invalidate certificates previously issued by a CA other than the one named by the domain owner. Second, it doesn’t flag whether a certificate presented by a web server is a legitimate certificate for that domain.

        Furthermore, in order for CAA checking to be effective, all CAs need to be doing it; it doesn’t work if only one or two CAs are checking CAA records as a matter of process. CAA checking must be widely adopted if it is to serve its purpose, but the good news is that more than ninety percent of CAs (who are members of the CA/Browser Forum) are in favor of it.

        The times are changing: CAA checking will become mandatory

        In February 2017, the CA/Browser Forum passed a ballot (which Symantec voted in favor of) requiring all CAs (even those who aren’t members of the Forum) to check for a CAA record as part of the certificate issuance process for each domain. In accordance with RFC 6844, CAs can no longer issue a certificate for a domain unless:

        1. The CA does not find any CAA records for the domain; or

        2. The certificate request is consistent with the applicable CAA Resource Record(s).

        The rule is effective as of 8 September 2017. You can read the ballot in full here.

        A good outcome for all companies

        Mandatory CAA record checking requires CAs to abide by the rules set out in specific CAA records, giving domain owners more control over certificate issuance. This makes it easier for companies (especially larger ones) to enforce a certificate issuance policy across business units. With CAA records applied to every domain, a company can specify a particular set of CAs, knowing no other CA can issue a certificate for its domains. This will help reduce the risk of certificate issuance by unauthorized CAs and help create a more secure and transparent online ecosystem.

        For more information on CAA with Symantec Certificates, go to the Symantec Knowledge Center.

        • Products
        • Certificate Authority
        • TLS
        • Thought Leadership
        • CA
        • Symantec Website Security
        • SSL
        • DigiCert Code Signing
        • certificates
      • Threat Isolation: Why You Can Now Browse Without Fear

        Oct 20 2017, 8:33 PM

        by Mark Urban 1

        The battle between malicious hackers and enterprise security practitioners has become an ever escalating arms race.

        Organizations invested in anti-virus, anti-spam, and host intrusion prevention services to bolster their security. And it worked - for a time. Attackers reacted by upping their game and began making progress again. Then, advanced malware sandboxes came along to catch more sophisticated attacks.

        Before long, however, bad actors found new ways to slip their malware past even the most sophisticated network defenses, confounding beleaguered defenders with advanced persistent attacks, spear phishing and other exploits.

        And now cybercriminals have started to use encrypted channels and multi-vector, multi-phased attacks.

        When enterprise security practitioners use forensic tools to conduct breach investigations, they often trace breach sources back to employees who clicked on very clever phishing emails or were led to a risky website that quickly downloads zero-day malicious content to their devices. The bad guys have become experts at using techniques like social engineering to trick employees into making security mistakes. It can be subtle – a clever new web site with a bit of bad JavaScript here, a malicious style sheet there, or maybe a document whose final fragment carries a malicious payload that activates after a day or two.

        The arms race script will repeat and change in ways we can’t know today.  But we’re looking to drive innovation in a different way – for the good guys.  

        Turning Point in the Malware Battle

        The advent of web and email isolation technology provides enterprises with a powerful tool to seal off their networks from infection, approaching security in a dramatically different way.

        The technology works by positioning itself between the users and the internet so that potentially malicious content gets executed in a secure, containerized environment, “isolating” the user from all code and content, good or bad. It works in the background, so there’s no impact on user experience. Users can interact with the website or the email content as if the isolation process was not even occurring.

        Early adopters in the healthcare, finance, government and telecommunications sectors are already deploying the technology to combat malware-laden threats arriving over the internet. But it is still early in what’s shaping up to be a major transition in the way security organizations fight malware. Indeed, Gartner, which included web isolation as one of the 10 most important technologies in the information security field, expects about 50% of enterprises will adopt isolation technology by 2021.  

        Since most attacks begin with malware delivered over the internet through email, URL links or malicious websites, the very act of moving the browsing process off the end-user’s device and isolating it in a network container eliminates the threat of a potential infection.

        “This is a fundamentally different approach where malware can't get to the users any longer,” said Mark Urban, Symantec’s VP of Product Strategy and Operations. “I think this can be a game-changing technology.”

        It’s also why Symantec last week announced an agreement to acquire Israel-based Fireglass, whose leading edge technology creates virtualized websites that let users browse content without having to fear that viruses might infect their devices and corporate networks.

        Fireglass's isolation technology deploys virtual containers which process web browsing sessions remotely. It delivers the end user a “visual stream” that is completely safe from malware. Because traffic is placed in a cloud or on-prem isolation container, no ransomware or other malicious content can wind up infecting endpoints or systems.

        “There’s no ability for code or content to reach users,” Urban noted. “It’s just a visual stream. Users can see it, click it, and interact with it just like normal. But nothing actually gets downloaded to their computer or executed in a browser except the visual image, which is harmless. All the HTML, Java, CSS – all the code – gets executed in a safe virtual container. In some ways, it’s the ultimate protection because bad stuff can’t reach the end user.”

        The computing architecture in web and email isolation serves as a proxy that essentially isolates the users and devices inside the enterprise and carefully manages their connections to the outside world. It applies different technologies that analyze information and content to ensure that malware can’t get into the network.

        “There is no silver bullet. But having a multi-layer approach to detection – with anti-virus scanning, advanced malware sandboxes, and behavioral analytics – is critically important,” Urban said. “And isolation technology adds the latest high-impact capabilities to the mix, allowing employees to interact with higher-risk sites and emails in a safe and secure manner.”

        Isolation offers organizations a way to strike a balance between IT’s desire to keep the computing environment safe and employees’ need to access information over the public internet. Millions of hosts - domains, subdomains, or IP addresses - pop up every day, and many have life spans of less than 24 hours. Many organizations choose to set their Secure Web Gateways to block users from going to uncategorized sites because of the risk they represent, even though many are legitimate destinations for business purposes.

        “The age-old challenge for security organizations is to find the right balance between keeping users happy and keeping their computing environment safe,” according to Urban.

        “In a perfect world, these organizations would block everything that’s even a little bit risky, and users would be OK,” he continued, “but in the real world, users do complain and security has to strike a balance between risk and access.” With web and email isolation, Urban added, users can get to the information they need and the business is protected from any threats lurking in the shadows. “The isolation path gives them a lot more flexibility,” he said.

        What Does Fireglass Do?

        The core technology can be delivered on-premises or as a cloud-service. It intercepts and executes web requests in a remote secured environment and will offer users safe access to uncategorized websites, without risk of malware infection, since each website interaction is isolated from the network.  The same isolation benefits hold true for files delivered from the web - users access files through isolation instead of downloading them to their machines.

        Businesses can then let their users interact with these sites and documents to accomplish their tasks, knowing that any malware introduced via these sessions will remain isolated from their network and not infect their environment.

        The upshot: A more open environment, happier users and better threat prevention. Now that’s a winning combination.

        • Products
        • DigiCert Complete Website Security
        • Thought Leadership
        • Symantec Website Security
      • Leveraging Information Driven Product Design to Accelerate Speed to Market

        Apr 16 2017, 12:58 AM

        by Nikon Rasumov 0

        • CIG
        • Cyber Insurance
        • CI Plus Device Credentials
        • IDPD
        • Blue Coat Data Loss Prevention (DLP)
        • Anomaly Detection for Industrial Control Systems
        • Advanced Threat Protection
        • Information-driven product design
        • Advanced Threat Protection Roaming
        • Products
        • CacheFlow
        • Thought Leadership
        • Authentic Document IDs for Brew
        • Advanced Threat Protection Endpoint
        • Certificate Lifecycle Platform
        • Data Loss Prevention and CASB - Symantec DLP Cloud and Symantec CloudSOC
        • Cloud-Delivered Web Security Services
        • Cloud Workload Protection
        • Data Loss Prevention Cloud Service for Email
        • Advanced Threat Protection for Email
        • Anomaly Detection for Automotive
      • A Guide to Multi-Factor Authentication

        Oct 20 2017, 8:50 PM

        by Darla Scott 1

        Today, computers and smart devices are inexpensive enough that we can own many of them: smart phones, laptops, tablets, and even wearable micro devices. Our work and private lives demand portability. This, along with a trend towards moving enterprise servers into the cloud, makes secure user authentication even more imperative…and tricky. That brings us to multi-factor authentication (MFA), what it means, and how it is achieved.

        What Is Multi-Factor Authentication?

        The goal of multi-factor authentication is to create a layered defense of two or more independent credentials: what you know (password), what you have (security token), and what you are (biometric verification). Requiring multiple factors to authenticate a user makes it more difficult for an unauthorized person to gain access to computers, mobile devices, physical locations, networks, or databases; each successive layer should help protect where other layers may be weak.
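        In code, this layered defense amounts to a gate that opens only when every configured factor verifies independently. The sketch below is purely illustrative - the verifier functions, secrets and codes are made-up placeholders, and a real system would hash passwords with a salted key-derivation function and validate tokens against a dedicated back end:

```python
import hashlib
import hmac

def check_password(supplied, stored_hash):
    """'What you know': compare a hash of the supplied password."""
    digest = hashlib.sha256(supplied.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)

def check_token(supplied_code, expected_code):
    """'What you have': a one-time code from a token or phone app."""
    return hmac.compare_digest(supplied_code, expected_code)

def authenticate(factor_checks):
    """Grant access only if every configured factor verifies;
    one strong factor never excuses a failing one."""
    return all(check() for check in factor_checks)

stored = hashlib.sha256(b"correct horse").hexdigest()
ok = authenticate([
    lambda: check_password("correct horse", stored),  # what you know
    lambda: check_token("492039", "492039"),          # what you have
])
print(ok)  # True: both factors verified
```

        The point of the all-factors gate is exactly the layering described above: stealing the password alone, or the phone alone, is not enough.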

        Multi-factor authentication is becoming more common, particularly in the financial industry, and is advancing to include retina and fingerprint scanning, voice recognition, and even facial recognition.


        How Does Multi-Factor Authentication Add Security Benefits?

        If only it were possible to develop a single method of authentication that was 100 percent accurate and could not be hacked, we wouldn’t need multi-factor authentication. But passwords can be seen, overheard, guessed, or bypassed; a token can be lost or stolen; and an identical twin or even a photograph may fool biometric recognition systems. This is why multi-factor authentication is currently very important to account security.

        The concept of security using multi-factor authentication is that, while there may be a weakness in one authentication factor—say, a stolen password or PIN—the strength of a second or third factor would compensate to provide proper authorization for access.

        What Multi-Factor Authentication Options Are Available for Mobile Devices?

        One-time passwords

        Applications are available which generate one-time passwords in the same way that security tokens have operated in the past. The one-time password is either generated on the device itself or delivered to it in a time-limited SMS message.

        Using a smartphone or tablet eliminates the need for a user to keep track of a token, and companies incur less cost replacing lost tokens, activating tokens for new employees, or deactivating tokens when an employee leaves.
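        To make the mechanics concrete, here is a sketch of how such an app typically derives a time-based code, following the published TOTP algorithm (RFC 6238, which builds on the HOTP algorithm of RFC 4226). It uses only Python’s standard library; the secret shown is the well-known RFC test value, not anything you should deploy:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA-1, 30 s steps)."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(for_time) // step)  # time-step counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in Base32) at T=59 seconds.
rfc_secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(rfc_secret, for_time=59))  # 287082
```

        Because server and app share the secret and compute the same code for the current time window, the code itself never has to travel over the network unless SMS delivery is used.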

        Biometric authentication

        Top smartphone manufacturers understand that security is a growing customer concern, and have also started offering biometric authentication to ensure that only the authorized user can access the device. Each of these techniques has advantages and disadvantages.

        Biometric Verification

        • Fingerprint authentication: individuals have unique fingerprints, but this requires integration with network access software.
        • Voice recognition: no extra hardware is necessary, but it is not effective in settings where the user must remain quiet, or with excessive background noise.
        • Facial recognition or retinal scanning: no extra hardware is necessary (when the device is equipped with a camera), but it is not effective in low light, and it is possible to defeat authentication with a photograph.

        How Is Multi-Factor Authentication Implemented in the Cloud?

        As data, communication, training, storage, server infrastructure and more are migrated to the cloud, IT admins must deal with the risks of moving beyond the more traditional on-premises server location. Multi-factor authentication for user access is essential to protect data in the cloud.

        Microsoft, Google, Amazon Web Services, Facebook, and Twitter—among others—all offer two-factor authentication for access to their cloud services, and some are extending to multi-factor authentication strategies.

        Multi-factor authentication for Office 365

        Office 365 requires a password to access applications on PCs, Macs, and mobile devices. The Office 365 admin tool automatically issues a random, 16-character token for users to sign in. When signed in, users are prompted to set up additional authentication.

        • Call My Mobile Phone: When the users receive the confirmation call, they press # in the phone's dial pad to log in.
        • Call My Office Phone: This works like Call My Mobile Phone, but the confirmation call is sent to a separate line, such as a desk phone.
        • Text Code to My Mobile Phone: A code is sent via SMS text message to the user’s phone, to be entered into the Office 365 login form.
        • Notify Me through App: The user can use a Microsoft smartphone app to receive and confirm the notification; the app is available for Windows Phone, iPhone, and Android.
        • Show One-Time Code in App: This uses the same app as for the Notify Me through App option, but sends a one-time, six-digit code that must be entered in the Office 365 login screen.

        Multi-factor authentication for Office 365 using Microsoft Azure Active Directory

        Office 365 with Microsoft Azure Active Directory is an enterprise-level solution that requires users to correctly enter a password, and then acknowledge a phone call, text message, or an app notification on their smartphone to authenticate and sign in.


        What Is the Best Way to Implement Multi-Factor Authentication?

        Using and supporting multi-factor tools requires that IT organizations coordinate and configure the enterprise infrastructure to get protected logins working properly. Most tools include various software agents that can protect VPNs, SharePoint servers, Outlook Web App, and database servers. As more traditional hardware-based onsite servers move into the cloud, most multi-factor solution vendors offer cloud and on-premise options. Customers are choosing offsite deployments more and more because of the support and management flexibility the cloud offers.

        It’s important to evaluate multi-factor authentication products carefully to determine how each one differs subtly with regard to the desired deployment. Not every vendor can handle all scenarios equally well, and this is often a prime factor in product selection. Here are a few questions to ask when preparing to look more closely at multi-factor authentication products for a business:

        1. How much private information does the network handle? If the network currently doesn’t handle much private information, and there are no plans to expand the storage of critical data, it’s probably not necessary to change existing authentication methods.
        2. Who will need to view the reports produced by these products? It’s important to determine who will receive alerts when something goes wrong with the authentication system. Some products can send out alerts whenever anything goes wrong, and most enterprises don't want to get management into a fire drill unnecessarily. 
        3. Does the business require the ability to scale up deployment? It’s important to consider future licensing costs. Most multi-factor products can handle tens of thousands of tokens and users, but they can also serve a smaller enterprise.
        4. Who will be among the initial collection of pilot users? This might determine which direction a company takes for securing particular apps and use cases.
        5. Are employees already using the two-factor authentication tools available with some consumer services? If not, enterprises should start spreading the word and making employees familiar with the second-factor options on common cloud services. Multi-factor authentication is already built into these services, and it won't cost anything other than a small amount of training time to try them.
        6. How will a password reset be handled in a multi-factor authentication environment? Ideally, any reset or recovery process should be at least as strong as the multi-factor authentication process itself. For example, the user might answer ‘secret questions’, or a code might be sent by SMS to a recognized phone number.

        What Are the Obstacles to Implementing Multi-Factor Authentication?

        Making a business case for multifactor authentication clearly requires some advanced planning. There are many use cases for the technology that can be applied in different ways to different parts of an IT infrastructure. Understanding how MFA will be used ahead of time will be helpful when it comes time to selecting a provider.

        Before you begin the task of picking a multi-factor authentication vendor, carefully consider the following possible obstacles to deployment:

        1. If your Active Directory is not lean and accurate, implementing an MFA solution will be a painful way to get it there.
        2. If you still use mostly on-premises servers, you might be better off using (or at least starting with) Windows Server's built-in password-strengthening policies. This will allow you to gauge how much resistance there is from users when they have to regularly change their passwords and make them more complex.
        3. If your company has a geographically-distributed staff, with a few people in many cities, it may be difficult to train the user population or disseminate physical key fobs. In such cases, enterprises may want to look into software tokens or software apps instead.

        The Future of Multi-Factor Authentication

        MFA has become a more mainstream option for financial firms and other consumer-facing businesses. In 2014, more than 1800 respondents to a Ponemon Institute survey indicated that their organizations planned to adopt some form of multi-factor authentication, while another 40 percent were considering it. As passwords become increasingly insecure, and as our mobile, cloud-based computing becomes more prevalent, multi-factor tools are finding use in just about every corner of the enterprise, especially where personal information is being consumed. For example, Symantec Validation and ID Protection Service is a highly scalable, cloud-based solution that delivers highly secure multi-factor authentication for enterprises of all sizes.

        • Products
        • Multi-Factor Authentication
        • MFA
        • DigiCert SSL TLS Certificates
        • Thought Leadership
        • Managed PKI for SSL
      • Balancing Certificate Transparency and Privacy

        Feb 18 2016, 7:28 PM

        by Michael Klieman 0

        In my last blog post, I shared that Symantec will have support for Certificate Transparency fully deployed across all of our products and customer-facing experiences in the next few weeks. 

        Certificate Transparency (CT) can help organizations monitor what active SSL/TLS certificates exist for the domains they own – and for many customers and use cases, the current implementation of CT works well. However, in cases where certificates are deployed for internal-only applications, some customers prefer to keep the information for their certificates private (particularly sub-domain information). For example, while a customer may be fine with publishing certificate information publicly for “”, that same customer may understandably object to logging “”. Today, the current Certificate Transparency specification RFC 6962 does not address these privacy concerns or use cases. 

        To handle these practical customer use cases, Symantec’s current implementation of CT logs all certificates by default but provides an option for customers to “opt out” of logging certificates. This approach is clearly not optimal because it creates a gap where not all certificates may be logged – however, this is presently the most effective way to address customers’ privacy concerns within the limitations of the current Certificate Transparency specification.

        Currently, the Internet Engineering Task Force is working on the next version of the Certificate Transparency specification — RFC 6962-bis. This new version will allow for sub-domain information to be redacted from CT logging. Using the case above, a customer will be able to have their certificate for “” logged as “?”. This approach will enable companies to address their privacy concerns while ensuring that all of their certificates are being logged and monitored. 
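        The effect of redaction can be illustrated with a toy function: replace the private left-most DNS labels with “?” so that only the registered domain reaches the log. This is a hypothetical string-level sketch of the idea only - the actual RFC 6962-bis mechanism is defined on certificate structures, and the public-suffix handling here is deliberately naive:

```python
def redact_for_ct_log(fqdn, public_labels=2):
    """Replace the left-most (private) labels of a domain name with '?'
    so that only the last public_labels labels appear in the CT log.
    Naive sketch: real code would consult the Public Suffix List."""
    labels = fqdn.split(".")
    private = len(labels) - public_labels
    if private <= 0:
        return fqdn  # nothing private to hide
    return ".".join(["?"] * private + labels[-public_labels:])

print(redact_for_ct_log("secret-project.internal.example.com"))  # ?.?.example.com
print(redact_for_ct_log("example.com"))  # example.com (unchanged)
```

        The logged entry still proves a certificate exists for the registered domain, but the sensitive sub-domain names stay private.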

        Symantec supports name redaction as the best way to address both transparency and privacy and we intend to implement the new specification as soon as it is finalized. 

        Learn more about our support for Certificate Transparency here.

        • Products
        • DigiCert Code Signing
        • Thought Leadership
        • Certificate Transparency
        • Symantec Website Security
      • Industrial Internet 4.0

        Jul 15 2015, 6:35 AM

        by Brian Witten 2

                    This quick post simply seeks to set context for software leaders hoping to help with the Industrial Internet, or “Industry 4.0” as many say in Europe, by highlighting a few points commonly missed by software leaders first stepping into industrial settings - particularly timely given the recent multi-hundred-billion-dollar projections for the size of the industrial internet software market.

                    Unfortunately, many of us with strong backgrounds in software don’t often realize the scale of time and cost at which most industrial plants operate.  Relining a blast furnace can cost $100M.  In auto manufacturing, each minute of downtime for a manufacturing plant costs $22,000 on average.  That’s $1.3M per hour, nearly three times more expensive than unplanned downtime costs for the average Information Technology (IT) organization.  Some pipelines move $32,000 of oil per minute.  That’s over $1.9M per hour.  In that context, it’s no wonder that plant operations teams often view planned and unplanned maintenance with a bit more intensity than most IT teams.  It’s also no wonder that companies are investing aggressively to optimize systems where a 10% improvement can produce gains of more than $200M per year for typical manufacturing plants.  It's equally clear why "security" means "availability" to these operational teams who have so much need to protect the uptime and integrity of these systems.  That's in direct contrast to traditional Information Technology (IT) teams who often must protect "confidentiality" and "secrecy" at the cost of uptime.  That's an important distinction as manufacturing companies look to carefully leverage these smart technologies to improve their performance.

                    According to many, the past 350 years of manufacturing are marked by three revolutionary advances: the steam engine for generating mechanical power, then electrification of manufacturing, and most recently, digitalization of manufacturing through simple Programmable Logic Controllers (PLC).  Many industrial leaders in Europe believe that they can produce a “fourth” such leap, “Industry 4.0,” by lashing digital manufacturing systems into highly virtualized, decentralized, and modular plants leveraging interoperable real-time systems to yield “smart” factories which outperform current manufacturing plants by the same degree to which mechanization, electrification, and digitalization have improved manufacturing in centuries past.  Beyond “linear” improvements such as the “10%” mentioned above, such digitally “integrated” plants will have the flexibility and agility not only to keep pace with increasingly nimble competition, but to stay ahead of it.

                    Of course, that connectivity brings both tremendous promise and risk.  Having belabored pipeline explosions and steel blast furnace damage from cyber attacks in past posts, I won’t repeat myself here, especially since Symantec has already given the “Dragonfly” attacks against Western energy companies such great in-depth coverage.  However, I will promise here that next month’s blog will propose a path “forward” for security of such next-generation Industrial Control Systems (ICS), not only leveraging the cornerstones of security for the Internet of Things (IoT), but also describing how they can be applied to the ICS of the Industrial Internet and Industry 4.0.  In the interim, if you’re impatient, feel free to read up on our latest security solutions for embedded systems at

        For more reading:

        • Products
        • Critical System Protection
        • Symantec Enterprise Security
        • Thought Leadership
        • Device Certificate Service
        • Identity and Authentication Services
        • Security Community Blog
        • Managed PKI for SSL
      • Hospitals Breached via Medical Devices?

        Jun 25 2015, 4:18 PM

        by Brian Witten 5

        Many were surprised to read that extremely sophisticated and expensive medical devices, such as X-Ray machines and blood gas analyzers, had been used as a pivot point to more broadly penetrate IT systems in three hospitals.  Even though the general vulnerability of networked medical devices has been well known, these are the first documented cases where such devices were used as pivot points for broader lateral attacks into the rest of the hospital. 

        With such exploitation now reported, I’d like to help “peel the onion” on why such obvious problems have been practically impossible to fix for so long.  Surprisingly, the answer has nothing to do with technology.  Many of these systems actually, believe it or not, run well-known software “under the hood,” such as various flavors of Windows and Linux.  Sadly though, these extremely important machines are almost never updated with the latest security patches.  Such risks aren’t a secret in hospitals.  The healthcare industry has long recognized them: these devices have previously been infected by malware such as Zeus, Citadel, Conficker, and more.  In fact, some (computer) virus infections have shut down entire hospital departments, required rerouting of emergency patients, or had similar implications for care delivery.

        Of course, any PC in the hospital, just like your laptop, has countless defenses against such malware.  Well-patched machines running effective, up-to-date anti-virus software are well protected against such malware and hacker attacks.   Unfortunately though, for regulatory or policy reasons, hospitals are not allowed to patch medical devices, even medical devices running Windows or other commercial software.  Similarly, hospitals are not allowed to install any additional software on these medical devices, even security software essential for protection.  The original logic stems from good reason.  Medical equipment, including its software, must undergo formal testing and be determined safe for patients.  Changing the software in any way, including patches, or adding software without explicit approval by the manufacturer can change the behavior of the device in ways that could endanger patients.  For such reasons, regulatory restrictions prohibit tampering with medical equipment, even if the tampering is intended to protect the equipment and ultimately protect the patients.

        How big are the risks?   Obviously there is no risk of “banking information” being stolen from an MRI.  However, some of these machines are so vulnerable that they may crash when they experience unexpected behavior.  Chris Eng, VP of Research at Veracode, recently tweeted that an MRI machine crashed when simply scanned for vulnerabilities, and other researchers have reported that a simple SNMP inquiry could “tip over” medical equipment. Of course, not all medical devices are that sensitive, but none of these devices should be so vulnerable.  When a device becomes infected, either as an entry point, a pivot point, or just as part of a broader infection, we need to be concerned about the potential consequences. Critical system controls may get altered and could result, for example, in an excessive radiation dose from a CT scanner.  Vulnerabilities found in insulin pumps have been shown to be outright lethal.

        Another concerning scenario would be that of a targeted attack on a medical device, for example to harm a specific patient or the reputation of a hospital. Although no such cases have been documented or reported to date, security researchers have demonstrated risks for Pacemakers (Kevin Fu), Insulin Pumps (Jerome Radcliffe) and Infusion Pumps (Billy Rios), the latter resulting in an advisory from Homeland Security’s ICS-CERT and a patient safety communication from the FDA.

        What is being done?  In 2014, the FDA issued guidance to medical equipment makers regarding cybersecurity for the medical devices that they make and sell.  I’m sure we’ll see further guidance, and potentially even enforcement, in years to come.  Device makers need to design in the cybersecurity as well as capability to update devices “in the field,” and need to work with regulators on a process whereby it is easier for such updates to be provided to their customers.  At the same time, hospitals are working on their processes to build a more secure medical device infrastructure.
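        The “update in the field” capability mentioned above hinges on the device being able to verify that an update really came from its manufacturer before applying it. As a toy sketch of that idea, here is an integrity check using an HMAC tag with a key provisioned at manufacture; the key names and message format are illustrative assumptions, and real deployments typically use public-key code signing so the device never holds a signing secret:

```python
import hashlib
import hmac

def sign_update(firmware: bytes, key: bytes) -> str:
    """Vendor side: tag the firmware image before distribution."""
    return hmac.new(key, firmware, hashlib.sha256).hexdigest()

def verify_update(firmware: bytes, tag: str, key: bytes) -> bool:
    """Device side: refuse any image whose tag does not verify.
    compare_digest avoids leaking information through timing."""
    expected = hmac.new(key, firmware, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"device-provisioned-key"      # hypothetical per-device key
firmware = b"\x00firmware-image-v2"  # hypothetical update payload
tag = sign_update(firmware, key)
print(verify_update(firmware, tag, key))                  # → True
print(verify_update(firmware + b"tampered", tag, key))    # → False
```

Even this minimal scheme would block the simplest attack of all: shipping a modified image to a device that applies whatever it is handed.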

        Could such a strategy work?  Will it?  Do you like the approach, or does it worry you?  Either way, I’d love to hear your thoughts.  Feel free to email us anytime at and visit us online at

        For more reading:

        • Products
        • Critical System Protection
        • Symantec Enterprise Security
        • Thought Leadership
        • Device Certificate Service
        • Identity and Authentication Services
        • IoT
        • healthcare IT
        • Security Community Blog
        • Managed PKI for SSL
      • Microsoft’s launch of Certificate Reputation

        Apr 17 2015, 9:51 PM

        by Dean Coclin 0

        A few weeks ago, Microsoft launched a new addition to their Bing Webmaster Tools that allows website operators to monitor their web domains to help ensure there are no improperly issued SSL certificates.

        This is a great benefit to those owners because:

        1. It’s easy to use and Microsoft monitors this for free

        2. The Certificate Authorities do not need to do anything special. Certificates are automatically monitored by Microsoft

        3. It’s integrated into the Bing Webmaster toolset. There is no need to sign up separately for the service

        4. It works for all types of SSL certificates, not just EV

        However, there are a few limitations today:

        1. This is currently a “preview” and only collects data from users on Windows 10, which is itself still in a preview release. Hence the data is limited, though this will improve with the formal release of Windows 10.

        2. The data that Microsoft is gathering is not made public, which prevents the public at large from also seeing the certificates. However, the need being addressed is that of website owners.

        More details are in this Microsoft blog.

        Trust continues to be enhanced in the Browser/Certificate Authority ecosystem (as discussed in this prior blog), and Certificate Reputation is another tool along this path, alongside Certificate Authority Authorization (CAA), Certificate Transparency (CT), and Public Key Pinning.
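        Until reputation data like Microsoft’s is public, site owners can approximate this kind of monitoring themselves against public Certificate Transparency logs. A minimal sketch of the idea, assuming the JSON shape returned by the public crt.sh log search (the `issuer_name` field and the sample records are assumptions for illustration, not part of the Microsoft service):

```python
import json

def unexpected_issuers(crtsh_json, expected_cas):
    """Return issuer names from a CT log search result that do not
    match any CA the site owner expects to issue for the domain."""
    entries = json.loads(crtsh_json)
    return sorted({entry["issuer_name"] for entry in entries
                   if not any(ca in entry["issuer_name"] for ca in expected_cas)})

# Hypothetical log-search response for a domain whose owner only uses Symantec:
sample = json.dumps([
    {"issuer_name": "C=US, O=Symantec Corporation, CN=Symantec Class 3",
     "name_value": "www.example.com"},
    {"issuer_name": "C=XX, O=Shady CA Ltd, CN=Shady CA",
     "name_value": "www.example.com"},
])
print(unexpected_issuers(sample, ["Symantec"]))
# → ['C=XX, O=Shady CA Ltd, CN=Shady CA']
```

Any issuer surfaced this way is worth investigating: it is either a forgotten legitimate certificate or a mis-issuance.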

        • Products
        • Symantec Enterprise Security
        • Thought Leadership
        • Symantec Website Security
        • SSL
        • Identity and Authentication Services
        • DigiCert Code Signing
        • certificates
        • Security Community Blog
      • DV SSL Certificates and Ecommerce don't mix

        Mar 29 2018, 10:31 PM

        by Dean Coclin 0

        Symantec’s just-released Internet Security Threat Report shows that cybercriminals have been busier than ever. Social engineering attacks are one vector that continues to grow because of its likelihood of success. Although the attacks come in different forms, one approach fools unsuspecting users into clicking a link that takes them to a “look-alike” website. That imitation site typically mimics a highly-phished domain (e.g., Apple ID or a popular bank or credit card site). But now, to appear legitimate, phishers obtain Domain Validated (DV) SSL certificates because they know that consumers have been trained to look for the padlock or “https” in the browser URL window. The appearance of this lock further legitimizes the attack and tricks consumers into disclosing their credentials or banking/credit card details.

        There are three types of SSL certificates, each requiring a different level of authentication: DV, OV and EV. Understanding the differences among the three SSL certificate types is important to help prevent falling victim to scammers. For example, DV certificates are quick and easy to procure and don’t require any information showing that the person requesting the certificate actually represents a legitimate business. Fraudsters often use DV certificates to lure consumers to phishing websites that look authentic but are designed to steal sensitive information. For this reason, doing any type of ecommerce transaction on a DV-only site poses risk. While there are appropriate use cases for DV certificates, it’s important to know how cybercriminals are taking advantage of DV certificates to conduct phishing scams and how to protect against these types of cybercriminal attacks.
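        One practical way to see the difference is to inspect a site’s certificate subject: a DV certificate names only the domain, while OV and EV certificates also carry a vetted organizationName. A minimal sketch using Python’s standard ssl module (the helper names and the 10-second timeout are illustrative assumptions, and this is only a heuristic, not a full EV check):

```python
import socket
import ssl

def fetch_cert(hostname, port=443):
    """Fetch the peer certificate presented in a live TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

def validation_hint(cert):
    """Rough DV-vs-OV/EV hint: DV certificate subjects name only the
    domain, never a vetted organization."""
    subject = dict(rdn[0] for rdn in cert.get("subject", ()))
    if "organizationName" in subject:
        return "OV/EV (verified organization: %s)" % subject["organizationName"]
    return "DV (domain validated only)"
```

A usage sketch: `validation_hint(fetch_cert("www.example.com"))` returns the hint string for a live site. The padlock alone proves only that the connection is encrypted; whether the certificate identifies a real organization is exactly what this subject check exposes.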

        Online shopping isn’t going away. Until the industry requires an OV or EV certificate for e-commerce sites or an easier way to identify the types of certificates, consumers will have to bear some of the burden of combatting cyber risks. Knowing the risks ahead of time, however, is half the battle. 

        • Products
        • Public Key Infrastructure (PKI)
        • Symantec Enterprise Security
        • Thought Leadership
        • SSL
        • Identity and Authentication Services
        • DigiCert SSL TLS Certificates
        • Security Community Blog