The research should address what strategies for generating and annotating metadata will increase the accuracy of matched queries and tasks for the given mission. This includes metadata auto-generated by artificial autonomous agents as well as annotations supplied by human users. The research should also address information retrieval techniques that select advantageous combinations of modalities (e.g., video, text, images) to significantly increase query accuracy and the value of information (VoI), while remaining aware of network availability and utilization. The network that deploys this methodology is expected to operate in contested and congested environments with intermittent communication links; the agents will therefore need to take advantage of all short-lived, high-rate communication opportunities if and when they arise.
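As a hedged illustration of the modality-selection problem described above, the sketch below greedily packs the content items with the highest VoI per byte into a short-lived, high-rate link window. The item names, VoI scores, and greedy policy are placeholder assumptions for illustration, not part of any specified framework.

```python
# Illustrative sketch: greedy value-of-information (VoI) selection of content
# modalities under a transient bandwidth budget. All names and VoI scores are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ContentItem:
    name: str        # e.g. "uav42-clip"
    modality: str    # "video" | "image" | "text"
    size_bytes: int  # transmission cost over the current link
    voi: float       # estimated mission value of delivering this item

def select_for_window(items, link_rate_bps, window_s):
    """Pick the item set with the best VoI density that fits the window."""
    budget = link_rate_bps * window_s / 8  # bytes deliverable this window
    chosen, used = [], 0
    # Greedy by VoI per byte: a standard knapsack heuristic.
    for it in sorted(items, key=lambda i: i.voi / i.size_bytes, reverse=True):
        if used + it.size_bytes <= budget:
            chosen.append(it)
            used += it.size_bytes
    return chosen

items = [
    ContentItem("uav42-clip", "video", 8_000_000, 0.9),
    ContentItem("uav42-frame", "image", 400_000, 0.6),
    ContentItem("uav42-annotation", "text", 2_000, 0.4),
]
# A 3 s contact at 2 Mb/s: the image + text (high VoI density) win over video.
print([i.name for i in select_for_window(items, 2_000_000, 3.0)])
```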
PHASE I: Explore and design strategies and algorithms that annotate, organize, and select metadata and information-content queries while remaining aware of network conditions and value-of-information requirements. Define a framework for intelligent capture of the interactions between human and artificial autonomous agents. Use this framework to share information over a heterogeneous wireless network. Demonstrate the viability of the solution through modeling and simulation.
PHASE II: Develop a specification and software implementation of the proposed algorithms and techniques from Phase I. Demonstrate the scalability properties of the proposed solution using a combination of artificial autonomous agents and human agents in a wireless mobile network combined with an emulated network. Demonstrate the capabilities using a network of wireless mobile nodes in a relevant outdoor scenario.
PHASE III DUAL USE APPLICATIONS: This research can enhance network support for intelligence gathering in coalition networks. Users (artificial and human) in these network settings are likely to generate large volumes of image and video content. Metadata-based information access can significantly enhance the information-carrying ability of the tactical network and thereby lead to greater mission success. In addition to military applications, manned-unmanned teaming efforts within the first-responder and homeland-security communities are expected to grow and to benefit from metadata-based information access. The improvements in network efficiency and scalability envisioned by this topic can also be inserted into those applications, enabling broader use of their capabilities.
REFERENCES:
1. T. Dao, A. Roy-Chowdhury, H. Madhyastha, S. Krishnamurthy, T. La Porta, "Managing redundant content in bandwidth constrained wireless networks," In ACM International Conference on emerging Networking Experiments and Technologies, 2014.
2. Richard E. Mayer, "Cognitive theory of multimedia learning," in The Cambridge Handbook of Multimedia Learning, R. E. Mayer, Ed. New York, NY: Cambridge University Press, 2014, pp. 43-71.
3. Richard E. Mayer. The promise of multimedia learning: using the same instructional design methods across different media. Learning and Instruction 13(2):125-139. 2003.
4. Rebecca J. Passonneau, Emily Chen, Weiwei Guo, Dolores Perin. Automated Pyramid Scoring of Summaries using Distributional Semantics. Proceedings of the 2013 Annual Meeting of the Association for Computational Linguistics. 2013.
5. Qian Yang, Rebecca J. Passonneau, Gerard de Melo. PEAK: Pyramid Evaluation via Automated Knowledge Extraction. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016.
6. M. Uddin, H. Wang, F. Saremi, G. Qi, T. Abdelzaher, and T. Huang, “PhotoNet: A similarity-aware picture delivery service for situation awareness,” in IEEE Real-Time System Symposium, 2011.
7. Yi Wang, Wenjie Hu, Yibo Wu, and Guohong Cao, "SmartPhoto: A Resource- Aware Crowdsourcing Approach for Image Sensing with Smartphones," ACM Mobihoc, 2014
8. Y. Wu, Y. Wang, W. Hu, X. Zhang, and G. Cao, "Resource-Aware Photo Crowdsourcing Through Disruption Tolerant Networks," IEEE International Conference on Distributed Computing Systems (ICDCS), 2016.
KEYWORDS: MUM-T, metadata, artificial agents, information network, communication network, routing, wireless network, drone, sUAV
A18-045
TITLE: Improved Communications Scheduling in Contested Environments
TECHNOLOGY AREA(S): Electronics
OBJECTIVE: Develop a set of modular machine-learning algorithms, possibly based on deep learning, that effectively avoid or mitigate interference (Red/Blue/self) and congestion in order to schedule reliable communications for Army tactical networks.
DESCRIPTION: Military communications waveforms today are typically Time Division Multiple Access (TDMA)-based. TDMA is a well-understood network-access method that enables a group of tactical nodes to communicate amongst each other. Current TDMA scheduling algorithms quickly become ineffective when the communications spectrum is congested and contested, because these algorithms are policy driven and have no ability to learn about potential impediments to reliable communications within the operational spectrum. Cognitive techniques are required to reason about the communications spectrum in order to determine when interference and congestion are occurring, and to classify that interference and/or congestion in near real time. The classification results are fed into the scheduling algorithm so that, as needed, communications are either rescheduled reliably and in a timely fashion to avoid the interference, or the interference/congestion is addressed with a robust mitigation technique.
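The control loop in the paragraph above can be made concrete with a minimal sketch: per-slot classification labels (however the proposed learner produces them) drive a TDMA scheduler that moves transmissions off impaired slots. The slot count, label strings, and first-fit reassignment policy are illustrative assumptions only, not a specified design.

```python
# Minimal sketch of the classify-then-reschedule loop. Labels come from an
# upstream interference/congestion classifier; this only shows the scheduler
# reacting to them.
N_SLOTS = 10
CLEAN = "clean"

def reschedule(assignments, slot_labels):
    """assignments: node -> slot. slot_labels: slot -> classifier label."""
    free = [s for s in range(N_SLOTS)
            if slot_labels.get(s, CLEAN) == CLEAN
            and s not in assignments.values()]
    new = {}
    for node, slot in assignments.items():
        if slot_labels.get(slot, CLEAN) == CLEAN:
            new[node] = slot          # keep reliable slots as-is
        elif free:
            new[node] = free.pop(0)   # move off jammed/congested slots
        else:
            new[node] = slot          # no clean slot left: fall back to mitigation
    return new

labels = {2: "jammer-swept", 5: "congested"}   # classifier output this epoch
print(reschedule({"A": 0, "B": 2, "C": 5}, labels))  # B and C are reassigned
```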
PHASE I: Establish the feasibility and basic requirements of machine-learning techniques that can sweep a segment of the communications spectrum while learning and recognizing interference and congestion. Develop an initial training set that minimizes signal-feature-extraction errors while enhancing communications by recognizing interference, all classes of jammers, and congestion impairments.
PHASE II: Design a TDMA scheduling algorithm that takes cues/inputs from a machine-learning algorithm. The machine-learning algorithm exchanges information in a distributed fashion, learning about interference, congestion, and jamming across the nodes of a tactical network. Seamlessly integrate the machine-learning algorithm with the scheduling algorithm, and demonstrate the capability in various modeling and simulation (M&S) scenarios.
PHASE III DUAL USE APPLICATIONS: Develop and prototype the capability of an integrated scheduling algorithm and machine-learning algorithm on a commercial-off-the-shelf (COTS) Software Defined Radio (SDR) and provide a realistic demonstration of the capability. The demonstration should show near-real-time avoidance of interference (EW, self-interference, co-site, and others) and/or congestion while reliably scheduling communications amongst networked nodes. This capability can be used in emerging on-the-move (OTM) tactical networks with manned-unmanned teaming (MUM-T) capabilities for the military, and in emerging commercial self-driving vehicle networks.
REFERENCES:
1. G. E. Hinton, S. Osindero, and Y. W. Teh. "A fast learning algorithm for deep belief nets." Neural Computation, 18:1527-1554, 2006.
2. A. Krizhevsky, I. Sutskever, and G. Hinton. "ImageNet classification with deep convolutional neural networks." In NIPS, 2012.
3. David Eigen, Jason Rolfe, Rob Fergus, and Yann LeCun. "Understanding Deep Architectures using a Recursive Convolutional Network." International Conference on Learning Representations, April 2014.
4. Tzi-Dar Chiueh and Pei-Yun Tsai. "OFDM Baseband Receiver Design for Wireless Communications." Wiley (Asia), 2007.
5. Dong Yu & Li Deng “Deep Learning and its Applications to Signal and Information Processing”, IEEE Signal Processing Magazine, Exploratory DSP, January 2011
6. CG Constable, “Parameter Estimation in Non-Gaussian Noise” Geophysical Journal, 1988
7. Yao Liu and Peng Ning. "BitTrickle: Defending against Broadband and High-Power Reactive Jamming Attacks." IEEE INFOCOM, 2012.
8. "Jamming and Anti-Jamming Techniques in Wireless Networks: A Survey." International Journal of Ad Hoc and Ubiquitous Computing, 2012.
9. University of Saskatchewan. "Signal Constellations, Optimum Receivers and Error Probabilities."
KEYWORDS: Deep Learning, Multi-Layer Neural Network, Time Division Multiple Access, Jamming, Interference, Congestion, Scheduling Algorithms, Communications Spectrum, Tactical Networks, Software Defined Radios
A18-046
TITLE: Radar Image-Based Navigation
TECHNOLOGY AREA(S): Electronics
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), which controls the export and import of defense-related material and services. Offerors must disclose any proposed use of foreign nationals, their country of origin, and what tasks each would accomplish in the statement of work in accordance with section 5.4.c.(8) of the Announcement.
OBJECTIVE: To mitigate the impact of Global Navigation Satellite System (GNSS) denial on navigation and to improve airborne platform inertial (attitude) estimates, develop navigation techniques based on the correlation of high-resolution radar imagery to previously collected radar imagery, optical imagery, and/or digital terrain elevation databases.
DESCRIPTION: Terrain reference has been employed for precision navigation for centuries, using visual aids such as significant topographical features and landmarks to establish one's current location. A more recent form, Terrain Contour Matching (TERCOM) [1], a technique pre-dating GNSS, compares optical features or a sequence of terrain-height measurements to databases to determine a platform's current position. For the latter approach, the system relies on terrain-height databases such as Digital Terrain Elevation Data (DTED) as fiducials. Depending on the specifics of the implementation, TERCOM systems are known to perform poorly in areas with little or no terrain relief and/or few salient optical features.
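A minimal sketch of the TERCOM profile-matching idea, assuming a 1-D along-track search against a single DTED row: slide the measured height profile along the database and take the offset with the highest normalized cross-correlation. Real systems add full navigation error models and 2-D search; the synthetic example below also shows why flat terrain (near-constant profiles) yields ambiguous fixes.

```python
# Hedged TERCOM-style sketch: match a measured terrain-height profile
# against an elevation database row by normalized cross-correlation.
import numpy as np

def tercom_fix(measured, dted_row):
    """Return the along-track index where the profile best matches the row."""
    m = measured - measured.mean()
    best_i, best_score = 0, -np.inf
    for i in range(len(dted_row) - len(m) + 1):
        w = dted_row[i:i + len(m)]
        w = w - w.mean()
        denom = np.linalg.norm(m) * np.linalg.norm(w)
        score = m @ w / denom if denom > 0 else 0.0  # normalized correlation
        if score > best_score:
            best_i, best_score = i, score
    return best_i, best_score

rng = np.random.default_rng(0)
terrain = np.cumsum(rng.normal(0, 5.0, 500))   # synthetic rough terrain
true_pos = 200
measured = terrain[true_pos:true_pos + 40] + rng.normal(0, 0.5, 40)
print(tercom_fix(measured, terrain))           # recovers ~200 on rough relief
```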
Synthetic aperture radar (SAR) and real-beam imaging can deliver nearly optical-quality, all-weather, day/night imagery in two dimensions [2]. Combined with elevation degrees of freedom, these systems can produce interferograms, which can in turn yield terrain-relief estimates [3]. By correlating radar imagery to existing optical databases or, in the three-dimensional case, DTED databases, imaging radar can assist in all-weather, day/night navigation.
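For the interferometric path, the repeat-pass sensitivity relation described in [3] gives a quick feel for how baseline and geometry set height accuracy. The numbers below (X-band wavelength, slant range, look angle, baseline) are arbitrary assumptions for illustration.

```python
# Back-of-envelope InSAR height sensitivity, using the standard repeat-pass
# relation dphi = (4*pi/lambda) * (B_perp / (R*sin(theta))) * dh (see [3]).
# All parameter values are illustrative assumptions.
import numpy as np

wavelength = 0.031            # m, X-band (assumption)
slant_range = 8000.0          # m (assumption)
look_angle = np.deg2rad(45.0) # (assumption)
b_perp = 2.0                  # m, perpendicular baseline (assumption)

# Height per radian of unwrapped topographic phase:
dh_dphi = wavelength * slant_range * np.sin(look_angle) / (4 * np.pi * b_perp)
height_of_ambiguity = 2 * np.pi * dh_dphi  # meters per full 2*pi fringe
print(f"{dh_dphi:.2f} m/rad; height of ambiguity {height_of_ambiguity:.1f} m")
```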
PHASE I: Identify both radar and reference data to be used to support image-based navigation studies. Develop navigation error models with the appropriate degrees of freedom. Establish quantitative relationships between the quality of reference imagery, the resulting registration (e.g., misalignment), and the associated navigation errors. Through analysis and empirical studies using existing radar imagery, establish under what conditions image-based navigation works effectively and when it fails. Summarize the performance of the technique under the conditions in which the model(s) was (were) tested. Develop plans for a Phase II demonstration on operationally relevant imagery.
PHASE II: To demonstrate the efficacy of the capability in previously untested environments, develop, in C/C++, MATLAB, or similar prototyping software, a near-real-time image-based navigation implementation. Identify a radar system capable of producing the necessary radar imagery. Informed by the results and lessons learned in Phase I, develop and execute test plans utilizing the radar to collect data. Demonstrate the algorithms' efficacy on data collected by the system in near real time.
PHASE III DUAL USE APPLICATIONS: Implement and integrate an RF image-based navigation algorithm for real-time use on an operationally relevant real-beam or SAR-based imaging system. Develop and execute test plans demonstrating the efficacy of the algorithm in operationally relevant environments. Develop and implement plans to effect the transition of the real-time capability to the operational system. The transition path is through the Degraded Visual Environment-Mitigation (DVE-M) Science and Technology Objective (STO).
REFERENCES:
1. https://en.wikipedia.org/wiki/TERCOM
2. G. Titi, D. Goshi, and G. Subramanian, “The Multi-Function RF (MFRF) Dataset: A W-Band Database for Degraded Visual Environment Studies”, SPIE D&C, April 2016.
3. P. A. Rosen, S. Hensley, I. R. Joughin, F. K. Li, S. N. Madsen, E. Rodriguez, and R. M. Goldstein, "Synthetic Aperture Radar Interferometry," Proc. IEEE, vol. 88, pp. 333-382, 2000.
KEYWORDS: GPS-Denied Navigation, Terrain Contour Matching, Radar Image-Based Navigation
A18-047
TITLE: Development of Tools to Derive High Level Language Code Associated with Executable Software
TECHNOLOGY AREA(S): Information Systems
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), which controls the export and import of defense-related material and services. Offerors must disclose any proposed use of foreign nationals, their country of origin, and what tasks each would accomplish in the statement of work in accordance with section 5.4.c.(8) of the Announcement.
OBJECTIVE: The task of reviewing programs for possible vulnerabilities is often complicated by the unavailability of the program's source code. Although it is possible to decompile binary executable code back to source code for some languages, for most languages the process remains either unreliable or impossible with current technology. Tools exist that can detect vulnerabilities within binary machine code, and binary code can be disassembled into assembly language, but it is difficult to verify such findings without expertise in machine code and assembly language, or without being able to translate the machine code back to source code. It is proposed that a tool or set of tools be developed to expand the ability to revert binary machine code to source code, or at least to a higher-level language beyond assembly language.
DESCRIPTION: There are a number of reasons why source code may be unavailable for systems in use within DoD, including limitations on contract data rights, the use of legacy code whose vendor is no longer available, and the inclusion of third-party libraries. In these cases, it is still necessary to assess the applications in order to identify potential vulnerabilities and determine their security posture. With current technology, source-code assessment can be achieved through numerous tools that parse the code and identify potential defects. Some tools are also available that can parse binary code and identify defects, but these tools generate some false positives, and further human analysis is required to eliminate those false positives and establish the true security posture. In the case of software without source code, this analysis can be extremely time consuming and requires specialized skill sets to understand the assembly language generated from the binary code.
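To make the analyst's current burden concrete, the hedged sketch below runs only the first, well-understood stage of binary analysis, disassembly, using the open-source Capstone library (pip install capstone); the byte string is a toy compiled function chosen for illustration. Lifting the resulting assembly to higher-level source is precisely the capability this topic solicits and is not shown.

```python
# Disassemble raw x86-64 machine code into assembly with Capstone.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Toy bytes: a compiled "return x * x" style function (illustrative only).
CODE = b"\x55\x48\x89\xe5\x89\x7d\xfc\x8b\x45\xfc\x0f\xaf\xc0\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(CODE, 0x1000):
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
# An analyst sees push/mov/imul/pop/ret and must infer the squaring logic by
# hand; the solicited tool would emit that higher-level form automatically.
```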
PHASE I: Develop a white paper/prototype that documents a process for developing a robust automated tool set that shall recreate high-level source code from binary software. The tool shall be able to reverse engineer multiple programming languages and regenerate code in its original language as developed before compilation. The proposed solution shall regenerate higher-level language code, allowing analysts the flexibility to effectively determine the overall security posture of the systems and to accurately review findings from binary-analysis tools. The solution shall allow assessment of software defects without the need to manually review any lower-level representation such as binary or machine code. All assessment will be performed in higher-level languages, for 100% of source code regardless of input language.
PHASE II: Develop a working prototype, based on the selected Phase I design, that demonstrates the capabilities of a tool or set of tools to revert machine code back to source code or to a higher-level language beyond assembly language. The solution shall find potential vulnerabilities throughout the Software Development Lifecycle (SDLC) and recreate high-level source code from binary software wherever a potential security defect has been identified; by catching defects during development rather than through problem reports after systems are fielded, sustainment costs can be drastically reduced and system readiness drastically enhanced. The solution shall identify vulnerabilities in source code that are associated with the Common Weakness Enumeration (CWE™) list. Upon identifying these defects, source code shall be generated in a higher-level language for 100% of the defects. The generated source code shall coincide with the full function or module in which the defect was identified and shall be generated regardless of the original language in which the code was developed.
PHASE III DUAL USE APPLICATIONS: In conjunction with the Army, optimize the prototype created in Phase II. Implement a robust tool that can recreate high-level source code for test and evaluation, using commercially available technologies. The implementation should ensure that the system is interoperable with the existing system of systems. Perform the steps required to commercialize the system.
REFERENCES:
1. Klocwork, "Developing Software in a Multicore and Multiprocessor World," Ottawa, ON, 2010.
2. G. McGraw, Software Security: Building Security In, Addison-Wesley Professional, 2006.
3. "Comparative Study of Risk Management in Centralized and Distributed Software Development Environment," Scientific International (Lahore), vol. 26, no. 4, pp. 1523-1528, 2014.
4. G. Vasiliadis, M. Polychronakis and S. Ioannidis, "GPU-Assisted Malware," International Journal of Information Security, vol. 14, no. 3, pp. 289-297, 2015.
5. M. Atighetchi, V. Ishakian, J. Loyall, P. Pal, A. Sinclair and R. Grant, "Metronome: Operating System Level Performance Management via Self-Adaptive Computing," in Proceedings of the 49th Annual Design Automation Conference, 2012.
KEYWORDS: Cyber Security, Commercial Off The Shelf (COTS), malicious, vulnerabilities, Software Development Lifecycle (SDLC), Binary Analysis, Source Code, False Positives, High-Level Language Code
A18-048
TITLE: Alternate GPS Anti-Jam Technology
TECHNOLOGY AREA(S): Electronics
OBJECTIVE: To develop alternative Global Positioning System (GPS) anti-jam technologies that improve performance in degrees of freedom (exceeding the N-1 limit) and provide reduced size, weight, power, and cost (SWaP-C).
DESCRIPTION: The current state of the art in GPS anti-jam technology relies heavily on antennas that consist of multi-element arrays and a processing unit that performs a phase-destructive sum of any intentional and unintentional interference signals in the GPS band. The protection that these technologies provide is limited by the number of individual elements contained in the antenna array: if an antenna array contains N elements, then it can attenuate interference sources coming from at most N-1 distinct directions of arrival. If this limitation is exceeded, the GPS signal will rapidly degrade and become buried in the noise. To overcome these limitations, GPS anti-jam technologies that do not rely on multi-element antennas are desired. Alternative technologies may include, but are not limited to, various hardware and software solutions such as antenna masking, power limiters, and advanced signal-processing techniques.
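A numerical sketch of the N-1 limit described above, assuming an ideal 4-element uniform linear array and a simple null-space projection (one of many possible adaptive weightings): three jammer directions can be nulled while boresight gain is retained, but a fourth independent direction would exhaust the array's degrees of freedom. Geometry and wavelength spacing are arbitrary assumptions.

```python
# Illustrate the N-1 degrees-of-freedom limit of a multi-element array.
import numpy as np

def steering(theta_deg, n, d_over_lambda=0.5):
    """Ideal uniform-linear-array steering vector (half-wavelength spacing)."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * d_over_lambda * k *
                  np.sin(np.deg2rad(theta_deg)))

N = 4
jammers = [-40.0, 10.0, 55.0]          # N-1 = 3 interference directions
A = np.column_stack([steering(t, N) for t in jammers])

# Weights orthogonal to every jammer steering vector (null-space projection).
P = np.eye(N) - A @ np.linalg.pinv(A)
w = P @ steering(0.0, N)               # keep gain toward boresight GPS

for t in jammers:
    print(f"jammer {t:6.1f} deg -> |response| = {abs(w.conj() @ steering(t, N)):.2e}")
print(f"GPS      0.0 deg -> |response| = {abs(w.conj() @ steering(0.0, N)):.2f}")
# Adding a 4th independent jammer direction makes A full rank: the projection
# null space collapses and no weight vector can null all four directions.
```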
PHASE I: The purpose of this topic is to have companies provide CP&I PNT with innovative, non-traditional means of achieving anti-jam GPS technology. Explore alternative anti-jam technologies that demonstrate performance beyond the N-1 degrees-of-freedom limit that constrains current multi-element-array anti-jam technology. The proposed alternative solution should also address reductions in SWaP and especially in cost (today's anti-jam antennas are very expensive). The final product for Phase I will be a specification for the Phase II prototype and a technical report detailing the tradeoff studies that were performed.
PHASE II: Refine and optimize the anti-jam technology from Phase I and develop, build, and demonstrate a prototype anti-jam device based on a non-traditional approach. A test report detailing the results of the Phase II prototype demonstrations will be delivered.
PHASE III DUAL USE APPLICATIONS: Transition technology to the U.S. Army. Integrate this technology into Army mounted and/or dismounted platforms as well as commercial applications that require anti-jam capabilities.
REFERENCES:
1. I. Gupta, I. Weiss, and A. Morrison “Desired Features of Adaptive Antenna Arrays for GNSS Receivers”, Proceedings of the IEEE, Vol. 104, Issue 6, June 2016.
2. J. Adam and S. Stitzer, "MSW Frequency Selective Limiters at UHF", IEEE Transactions on Magnetics, Vol. 40, No. 4, July 2004.
3. B. Qiu, W. Liu, and R. Wu, “Blind Interference Suppression For Satellite Navigation Signals Based On Antenna Arrays”, IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP), July 2013.
KEYWORDS: Anti-jam, GPS, advanced signal processing techniques, filters, interference suppression.
A18-049
TITLE: Predictive Visualizations to Aid Rapid Decision Making
TECHNOLOGY AREA(S): Human Systems
OBJECTIVE: To provide Tactical Commanders with an improved method of automatically visualizing data, tailored to human cognition, for prompt and efficient decision making. A focus on reliability, as opposed to complexity, would provide a more approachable user experience for Tactical Commanders expecting concise results in rapid assessments.
DESCRIPTION: Tactical Commanders require improved, tailorable and automated data visualization approaches in the Command and Control, Communications and Intelligence (C3I) domain to help them grasp and take advantage of key information in both rich and sparse data environments. For data to be useful and actionable, investments must be made in analyzing that information and communicating it in a way that is easy to use and practical. This requires analysis and development of mechanisms that enable non-data specialists to understand and use data.
Data visualization approaches should consider human cognition and the current context, and should adapt to fit the commander's thought process and decision-making horizons. An effort is needed to study, define, and categorize a reasonable set of military decisions that can be improved with new data visualization approaches. The paper "A Showcase of Visualization Approaches for Military Decision Makers" proposes a conceptual model for capturing and implementing advanced data visualization. This model, with some of its detail shown below, is meant only as an example (other models can be proposed) of how new data visualizations can be described and prototyped.