Articles | Intelligent Systems Source


Old Standards for Neural Networks Bring New Risks

Neural network accelerators are playing catch-up with AI applications that are already leaving them behind

As artificial intelligence infiltrates an increasing number of fields, its use is moving from non-real-time situations to latency-critical real-time applications, led by autonomous vehicles. Because AI has grown so quickly, however, benchmarks have fallen behind. Teams considering hardware acceleration for self-driving cars, robotics and other real-time applications are therefore using the wrong tools for platform selection, which results in significant cost, wasted power, and systems that are simply not designed to handle tasks like real-time inference.

The team at AImotive has a background in developing benchmarks for high-performance graphics, so we understand hardware platforms extremely well. When developing aiDrive, our full-stack software suite for self-driving, we saw a substantial gap in suitable hardware. That is why we created aiWare. In creating our IP we took into account the demands of real-time applications such as aiDrive, not only the benchmarks everyone else uses. As a result, we can achieve far higher performance and lower latency than others, at a fraction of the power consumption.

The traditional approach to benchmarking neural network accelerators for computer vision centers on a relatively simple task handled by relatively simple neural networks: image classification. An image with a resolution of 224 by 224 pixels is fed into the network, and the algorithm must correctly identify what is in the picture. Runtime is measured, but latency is barely considered. This is where problems begin, because large inputs run through complex neural networks expose the inherent flaws of current embedded accelerators.
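Below is a minimal, hypothetical sketch of what a latency-aware measurement can look like, in contrast to the throughput-only figures classic benchmarks report. The `run_inference` callable and the frame source are assumptions standing in for whatever accelerator API and camera pipeline are under test; this is not AImotive's benchmark.

```python
# Minimal sketch (assumptions only): contrast a throughput-style measurement
# with per-frame latency percentiles. `run_inference` is a hypothetical stand-in
# for the accelerator API being evaluated.
import time
import statistics

def benchmark(run_inference, frames, warmup=10):
    # Warm up caches, compilers and clock governors before timing.
    for frame in frames[:warmup]:
        run_inference(frame)

    latencies_ms = []
    start = time.perf_counter()
    for frame in frames:
        t0 = time.perf_counter()
        run_inference(frame)  # one frame at a time, as a vehicle camera would deliver them
        latencies_ms.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - start

    return {
        "throughput_fps": len(frames) / elapsed,            # what classic benchmarks report
        "latency_p50_ms": statistics.median(latencies_ms),  # what a real-time system lives or dies by
        "latency_p99_ms": sorted(latencies_ms)[int(0.99 * len(latencies_ms)) - 1],
    }
```

An accelerator can post an impressive frames-per-second number while its 99th-percentile latency remains far too long for a vehicle that must react within a fixed time budget.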

Benchmarking processes must adapt to accommodate new demanding use-cases, such as self-driving technology. Current accelerators are ill-equipped to handle streams of [...]
April 2nd, 2018 | Articles, RTC Magazine

Cybersecurity Meets Physical Safety: Shoring-Up Weak Links in Critical Operations

By Deborah Lee James, Former Secretary of the Air Force

Cyber-attacks are a daily occurrence for the US Air Force. In an unfortunate parallel to private industry, Air Force networks are attacked, and defended, thousands of times each week. I know this all too well as the former Secretary of the Air Force. In my current role, I am also well aware that operators of public as well as private critical networks are forced to pour more time, attention and resources than ever before into computer network security, because it is so critical to our nation’s safety and economic vitality.

In light of these costs, and the associated high stakes, a clear return on investment is imperative. Such a return is not certain if we train our sights solely on network security. Cyber intruders have shown us repeatedly that many avenues are open to them for launching attacks and compromising critical data. That is why the Air Force is investing more heavily in operational security, and the private sector cannot afford not to follow suit.

Operational security encompasses the entire portfolio of assets that execute processes, or missions, as directed by software code. These processes might be setting a flight path for an advanced fighter aircraft, or they may direct automated maintenance routines for an HVAC system on a facility where classified operations are conducted.

The Industrial Internet of Things (IIoT), characterized by the vast and complex interconnections of different systems, has opened innumerable gateways to our economy’s many enemies. Today’s cybersecurity for our critical infrastructure — dams, powerplants, industrial complexes — extends barely beyond the network core. Yet surprisingly, and alarmingly, edge devices [...]
March 28th, 2018 | Articles, COTS Journal

Can Artificial Intelligence Forecast Bitcoin?

There has been quite a bit of concern over the years about artificial intelligence and its impact on investment markets. In short, regular traders increasingly feel they are at a disadvantage as algorithmic programs that essentially amount to AI learn to make the right trades, and to make them quickly. Then again, there are also benefits for some casual investors. For instance, the popular mobile app Acorns has set itself up more or less as an automated mutual fund, investing people’s spare change on its own according to user preferences for a risky or conservative approach.

What isn’t discussed too much just yet is whether concerns about artificial intelligence in financial investing ought to extend to the burgeoning cryptocurrency market. It’s something that may in fact be worth talking about.

To this point, the biggest supposed link between AI and cryptocurrency has been something of an internet conspiracy theory. The theory is not, as one might expect, that AI programs are determining high volume cryptocurrency trades; rather, it’s that bitcoin might have been created by AI. This makes some intriguing sense on the surface because the creator of bitcoin (as well as the blockchain and by extension all other cryptocurrency concepts) is notoriously mysterious. We don’t have a full, confirmed identity for the figure, so some on the internet believe that there is no figure.

Given that the theory tends to go on to suggest that the AI that created bitcoin is attempting a slow takeover of the world, it’s fairly easily dismissed. It does at least get the conversation started about what role if [...]
March 19th, 2018 | Articles, RTC Magazine

Designing the IoT into HVAC Products and Systems

Unless you’ve already designed, built, and marketed a connected IoT product, it’s almost impossible to imagine the intricacies that will arise. That’s because IoT interconnectedness—and the multilayered implications of that interconnectedness—affect even the most mundane-seeming product decisions.

By Vinay Malekal, Ayla Networks

HVAC product design has become a game of balance to determine how many new features can be squeezed into an existing product design without breaking it.

That approach won’t work for the Internet of Things (IoT). While HVAC architectures have always been composites of multiple, distributed elements, the distributed elements of IoT products operate less and less independently from the system as a whole.

Think About Interoperability

For IoT products to reach their full potential, they must connect and interoperate with the broadest possible range of other connected products, from a variety of manufacturers, as well as with services such as energy management, weather forecasting and environmental conditions. Your IoT products will also need to reach customers worldwide and support cloud-to-cloud connectivity with various IoT platform, manufacturer, and retailer clouds.

Interoperability is something that has to be designed in from the outset; it can’t be added on later.

Plan how your connected HVAC products will interoperate with other connected products—both your own and those from other manufacturers—as well as with various cloud infrastructures and third-party services. Also consider how your connected HVAC products will interoperate with other technologies, devices, and services that might emerge in the future.

The best way to achieve interoperability is to use open, native libraries and other standards-based solutions. Choose a cloud architecture that is schema-less and agnostic to particular data types. That way, your connected HVAC products can interoperate with existing clouds and connectivity methods as well as with future cloud and connectivity approaches.
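As an illustration of the schema-less idea, here is a hypothetical sketch of a self-describing datapoint envelope. The field names and the `datapoint` helper are assumptions made for the example, not Ayla Networks' actual data model.

```python
# Illustrative sketch only (hypothetical field names): a schema-less representation
# where each HVAC property is a self-describing datapoint, so a cloud service can
# store and route it without knowing the product in advance.
from datetime import datetime, timezone

def datapoint(device_id, name, value, unit=None):
    return {
        "device_id": device_id,
        "property": name,   # property names are free-form, not a fixed schema
        "value": value,     # the value type is whatever the property needs
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# The same envelope carries a temperature reading, a fault code, or a mode flag.
points = [
    datapoint("hvac-unit-042", "supply_air_temp", 18.5, "C"),
    datapoint("hvac-unit-042", "compressor_fault", "E131"),
    datapoint("hvac-unit-042", "eco_mode", True),
]
```

Because each envelope carries its own property name, value and timestamp, a cloud service can ingest readings from products it has never seen before without a schema migration.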

Approach Security [...]

February 26th, 2018 | Articles, RTC Magazine

Struggling with How to Apply Virtual Reality/Augmented Reality to Your Training Needs? Three Guidelines to Make It Work and Nail Down the ROI Target

By Raj Raheja, CEO, Heartwood

At times, technology articles overstate the potential that Virtual Reality (VR) and Augmented Reality (AR) offer for training, now that low-cost headsets and devices are commercially available and relatively easy to deploy.

By now, every service of the military either has deployed or is in the process of developing VR/AR-based training exercises.

The U.S. Navy hopes to save $1 billion by incorporating AR technology into its shipbuilding process (through Newport News Shipbuilding). By adding the 3D component of augmented reality to the traditional 2D approach, shipyard workers can quickly understand and perform tasks like placing studs in a bulkhead or steel panel, saving hours per person, per task, daily.

On the Operations and Maintenance side, virtual reality training is being deployed for Littoral Combat Ship (LCS) crews, providing critical, real-time feedback.

This is just the tip of the iceberg, although there are some lessons to be learned and caveats to keep in mind, as experience is gained.

Many teams want to start deploying (or trying out) VR/AR-based training without accounting for fundamental training needs and lifecycle considerations. This siloed approach will stall and limit project or program ROI.

Here is an effective roadmap that companies can follow from the start, when deploying visual and immersive training solutions:

1. Plan for Training + VR/AR, not VR/AR + Training

VR/AR must be additive in the training lifecycle, not siloed – as a tech innovation available to only those few with the specific hardware on hand. Approximately 80% of the cost involved in creating immersive VR/AR Operations & Maintenance training content can be re-purposed across many platforms – like web, mobile, and laptops. This [...]

February 20th, 2018 | Articles, COTS Journal, Special Feature

Situational Awareness

By Drew Castle, Vice President of Engineering, Chassis Plans

With the constant emergence of new display technologies in the consumer sector, it is important for program managers and engineers in military markets to be aware of which of these advances can be incorporated into new military programs, and how, to achieve maximum benefit. Additionally, as older programs come due for technology refreshes, it is imperative to understand the differences between display specifications from previous decades and how updated technology provides avenues for their evolution and continued progress.

It is no secret that the amount of data available to today’s warfighter is staggering. With the number of data sources ever increasing, from new and more advanced Tactical Data Links and improved mapping and terrain data to advanced weather depiction, it becomes even more important to have new and better human interface technology so that the information can be quickly digested and decisions made and acted on.

Modern user terminals have high-resolution displays with many enhancements for interfacing with this data. Major trends such as touch screens and enhanced pointing devices, gesture support, biometric authentication and voice recognition are ways to interact with data quickly and securely.

The transition from cathode ray tube displays to early thin film transistor liquid crystal displays occurred rapidly across myriad industries, and the military was no exception. Terminals with multiple display screens, like the three-screen display shown in Figure 1, are seeing transit case deployment in environments where their larger predecessors would never have been taken. The advantages of smaller overall size and decreased power consumption made the transition an obvious upgrade for a military with a continued focus on rapid tactical capabilities. Current advances in display technology have seen the [...]

February 20th, 2018 | Articles, COTS Journal, Special Feature

Elements of a Video Management System for Situational Awareness

By Val Chrysostomou, Curtiss-Wright Defense Solutions

There has recently been a proliferation of cameras and sensors on board ground and airborne platforms for situational awareness applications. This creates a growing challenge: how best to provide operators with as much usable visual information as possible while ensuring that the data is readable and actionable in real time. Adding to the complexity of the problem, the operator is typically limited, because of space, weight and power constraints, to a single display screen.

If the operator has to switch between views to access the information needed for good situational awareness, the result can be delays and an incomplete picture that hinder the mission. This includes alternating between different layers of information from numerous sensor feeds. Video should be presented to users in a way that helps them meet their objectives, as poorly displayed information can cause confusion that is detrimental to the mission.

The most effective video management systems (VMS) enable a platform’s crew to control their video options – such as sensor inputs, screen configuration, underlay maps and video recording – directly from their touchscreen display. When a crew member’s display also serves as the VMS control center, complete control of surveillance video comes at the touch of a button. The principal advantages of a VMS are simpler integration and maintenance, reduced cost and higher reliability (through simplified inter-unit cabling), and flexibility and scalability for platform upgrades.

A VMS is typically characterized by the following (a configuration sketch follows the list):
  • Video streams from multiple sensors or computers
  • Distribution of video streams to multiple displays
  • Flexible display of video – full-screen or quad/picture-in-picture/picture-by-picture
  • Multi-channel recording capability
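As a concrete illustration of those characteristics, here is a hypothetical configuration sketch. The class and field names are assumptions made for the example, not Curtiss-Wright's API.

```python
# Hypothetical sketch (assumed names): expressing the VMS characteristics above
# as data that a touchscreen control UI could edit.
from dataclasses import dataclass, field
from enum import Enum

class Layout(Enum):
    FULL_SCREEN = "full-screen"
    QUAD = "quad"
    PICTURE_IN_PICTURE = "picture-in-picture"
    PICTURE_BY_PICTURE = "picture-by-picture"

@dataclass
class DisplayConfig:
    display_id: str
    layout: Layout
    sources: list               # sensor or computer video streams routed to this display
    underlay_map: bool = False

@dataclass
class VmsConfig:
    displays: list = field(default_factory=list)
    recorded_channels: list = field(default_factory=list)   # multi-channel recording

config = VmsConfig(
    displays=[
        DisplayConfig("commander", Layout.QUAD,
                      ["EO_turret", "IR_turret", "rear_cam", "map"], underlay_map=True),
        DisplayConfig("driver", Layout.FULL_SCREEN, ["driver_cam"]),
    ],
    recorded_channels=["EO_turret", "IR_turret"],
)
```

Keeping the routing and layout as plain data is one way to let a crew member reconfigure what each screen shows at the touch of a button, without any change to cabling.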

Depending on the platform, there may be wide variation in how many sensors are supported. [...]

February 20th, 2018 | Articles, COTS Journal, Special Feature

A few keys to succeeding with displays…

By John Aldon, PhD, President, MILCOTS

Integrating a display into a subsystem, whether it is a 24-inch display for an operator console or a 15-inch rugged panel PC controlling a gun system, may seem an obvious and straightforward task. Field experience tells a different story: various factors will influence the performance of a video link, regardless of the display being used. We provide a few examples of real situations and emphasize the best ways to anticipate problems and save all parties time and frustration.

Knowing the video sources

As with any product, a display manufacturer has to deal with a variety of situations when discussing a new project with a customer, and aligning expectations can be a challenge if key points are not properly reviewed up front. Most customers will not spontaneously disclose much about the architecture of the system the display will be embedded in. Even though a display may seem a fairly simple item, some of the systems we deal with, such as a weapon control station, a multi-function operator console or a large 55” 4K damage control panel, involve many customer-controlled subassemblies that may rely on legacy, obsolete video sources. Assuming that the video feed provided by the customer always meets today’s standards is like shooting in the dark: it may work, but it can also lead to a tremendous amount of time spent afterwards when, for whatever reason, the final performance of the video link falls short of expectations. A recent representative example was a request for an HD-SDI port as an alternate video input on a 17” display, without mention that the video feed was delivering a 30 Hz signal. The video controller planned for this project was [...]

February 20th, 2018 | Articles, COTS Journal, Special Feature

Using Artificial Intelligence to Counter Cyber Threats

By Tim Crosby, Spohn Security Solutions

In a constantly evolving digital threat landscape where firewalls and antivirus programs are considered tools of antiquity, companies are looking to more technologically advanced means of protecting crucial data. Artificial intelligence (AI) is becoming a global warrior against cyber threats, as security technologies incorporate AI programs that use deep learning to discover similarities and differences within a data set.

“AI to enhance our response!” “AI, the future of threat intelligence is here!” “AI-enabled threat solutions!” It all sounds great – science fiction for today. AI will make your life easier and eliminate all but the most sophisticated attacks by government-sponsored secret organizations that couldn’t possibly want to target you, right? The reality is that it still requires you, the human responsible for securing the network, to make decisions.

AI, in terms of today’s marketing or social-media-driven definitions, is machine learning. It is the combination of IPS, IDS, firewalls, routers, switches, SNMP and logs all feeding data to a SIEM (Security Information and Event Management) system. The SIEM needs to be loaded with known threat analysis information – behavioral patterns, traits, and malicious software signatures from a subscription service and/or observed and logged behavior – which comes from humans who have identified these while responding to an attack or compromise. Every attack is new or modified to get around known defenses, and identifying these new attacks still requires people and teams.

AlienVault’s latest commercial SIEM 2.0 product is approaching 100,000 known behavioral patterns that are indicators of malicious activity. As fantastic as this product is, it still requires human interaction. When an event is detected, it produces an alert or triggers positive action that ultimately requires a human to determine if it was an attack, a new [...]
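A minimal sketch of the correlation step described above, assuming a simplified event format and a hand-written indicator list; it is not AlienVault's product, only an illustration of why a human still has to triage what the matching produces.

```python
# Minimal sketch (hypothetical event format and indicators): match incoming log
# events against known indicators and raise alerts for a human analyst to triage.
KNOWN_INDICATORS = [
    {"name": "brute-force login", "field": "event", "pattern": "failed_login", "threshold": 10},
    {"name": "known-bad file hash", "field": "file_md5", "pattern": "d41d8cd98f00b204e9800998ecf8427e", "threshold": 1},
]

def correlate(events):
    alerts = []
    for indicator in KNOWN_INDICATORS:
        hits = [e for e in events if e.get(indicator["field"]) == indicator["pattern"]]
        if len(hits) >= indicator["threshold"]:
            # The SIEM can flag the pattern; a person decides whether it is a real attack.
            alerts.append({"indicator": indicator["name"], "count": len(hits)})
    return alerts

sample = [{"event": "failed_login", "src_ip": "203.0.113.7"}] * 12
print(correlate(sample))   # -> [{'indicator': 'brute-force login', 'count': 12}]
```

A genuinely novel attack matches nothing in the indicator list, which is the gap the article says still falls to human responders to close.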

OpenFog Security Requirements and Approaches

TECHNICAL CONSORTIUM PAPER – (This is not an RTC Exclusive) The emerging interconnection among mobile/IoT devices, Fog Nodes and Cloud servers is creating a multi-tier pervasive communication-computing infrastructure that will one day embody billions of devices and span elaborate hierarchies of administration and application domains. This novel infrastructure and its operation paradigms will give rise to new security challenges as well as new service opportunities. This paper provides an overview of the security landscape of the OpenFog architecture, as well as a survey of the functional requirements and the technical approaches currently being discussed in the OpenFog Security Workgroup. As a report of ongoing work, this paper aims at stimulating further dialogue on OpenFog Security and fostering future development of novel technologies and practices.

I. Introduction

With the deployment of Next Generation Mobile Networks (NGMNs), the Internet of Things (IoT) and Edge/Fog/Cloud Computing, the world is undergoing the largest overhaul of our information service infrastructure ever. This will drastically change the ways we live, work, move around, produce goods, provide services, interact with one another and protect our planet. Naturally, along with the foreseeable benefits come potential problems. Information security and service trustworthiness have long been identified as the preeminent issues of our heavy dependency on the global information infrastructure. The pervasive presence of smart devices and their physical vulnerability heighten our concerns. The increasingly devastating cyber-attacks [1,2] seem to confirm our worst nightmares. The sluggish responses of product and service vendors to these vulnerabilities and attacks often leave us feeling helpless. In the OpenFog Consortium [3], we firmly believe that by inserting pervasive, trusted, on-demand computing services between information providers and consumers, we can mitigate security risks and ensure service availability and responsiveness. In [...]