Found this at Darpa's website; haven't read it all yet. Posting for near-future reading and reference.

The Defense Advanced Research Projects Agency (DARPA) often selects its research efforts through the Broad Agency Announcement (BAA) process. The BAA will be posted directly to FedBizOpps.gov, the single government point-of-entry (GPE) for Federal government procurement opportunities over $25,000. The following information is for those wishing to respond to the Broad Agency Announcement.

The Information Processing Technology Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA) is soliciting proposals to develop an ontology-based (sub)system that captures, stores, and makes accessible the flow of one person’s experience in and interactions with the world in order to support a broad spectrum of associates/assistants and other system capabilities. The objective of this "LifeLog" concept is to be able to trace the "threads" of an individual's life in terms of events, states, and relationships.

Functionally, the LifeLog (sub)system consists of three components: data capture and storage, representation and abstraction, and data access and user interface. LifeLog accepts as input a number of raw physical and transactional data streams. Through inference and reasoning, LifeLog generates multiple layers of representation at increasing levels of abstraction. The input data streams are abstracted into sequences of events and states, which are aggregated into threads and episodes to produce a timeline that constitutes an "episodic memory" for the individual. Patterns of events in the timeline support the identification of routines, relationships, and habits. Preferences, plans, goals, and other markers of intentionality are at the highest level.
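The layered structure described above (events and states aggregated into threads and episodes to form a timeline) can be sketched as a minimal data model. All class and field names here are illustrative assumptions, not part of the solicitation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of the LifeLog abstraction layers: raw data is
# abstracted into events/states, aggregated into episodes, and ordered
# into a timeline ("episodic memory"). Names are illustrative only.

@dataclass
class Event:
    label: str          # e.g. "boarded_flight"
    timestamp: float    # seconds since some epoch

@dataclass
class State:
    label: str          # e.g. "in_transit"; transitions are marked by events
    start: float
    end: float

@dataclass
class Episode:
    label: str                      # e.g. "trip to Boston"
    events: List[Event] = field(default_factory=list)
    states: List[State] = field(default_factory=list)

    def span(self) -> Tuple[float, float]:
        """Episode boundaries: earliest to latest timestamp it contains."""
        times = [e.timestamp for e in self.events]
        times += [t for s in self.states for t in (s.start, s.end)]
        return (min(times), max(times))

@dataclass
class Timeline:
    """The 'episodic memory': an ordered sequence of labeled episodes."""
    episodes: List[Episode] = field(default_factory=list)

    def add(self, episode: Episode) -> None:
        self.episodes.append(episode)
        self.episodes.sort(key=lambda ep: ep.span()[0])
```

A real system would add the higher layers (routines, relationships, intentionality) on top of this timeline; the sketch stops at the episodic level the paragraph describes.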

LifeLog is interested in three major data categories: physical data, transactional data, and context or media data. “Anywhere/anytime” capture of physical data might be provided by hardware worn by the LifeLog user. Visual, aural, and possibly even haptic sensors capture what the user sees, hears, and feels. GPS, digital compass, and inertial sensors capture the user’s orientation and movements. Biomedical sensors capture the user’s physical state. LifeLog also captures the user’s computer-based interactions and transactions throughout the day from email, calendar, instant messaging, and web-based transactions, as well as other common computer applications, and stores the data (or, in some cases, pointers to the data) in appropriate formats. Voice transactions can be captured through recording of telephone calls and voice mail, with the called and calling numbers as metadata. FAX and hardcopy written material (such as postal mail) can be scanned. Finally, LifeLog also captures (or at least captures pointers to) the tremendous amounts of context data the user is exposed to every day from diverse media sources, including broadcast television and radio, hardcopy newspapers, magazines, books and other documents, and softcopy electronic books, web sites, and database access.
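As a rough illustration of how a capture component might tag incoming data with one of the three categories and its metadata: the function name, field layout, and the example phone-call record below are all hypothetical, chosen to mirror the metadata the paragraph mentions (called and calling numbers):

```python
# Illustrative capture record for the three data categories named in the
# solicitation. The record layout is an assumption, not a specified format.

PHYSICAL = "physical"
TRANSACTIONAL = "transactional"
CONTEXT = "context"

def make_record(category, source, payload, **metadata):
    """Wrap a raw capture (or a pointer to it) with its category,
    source, and whatever metadata the source provides."""
    if category not in (PHYSICAL, TRANSACTIONAL, CONTEXT):
        raise ValueError(f"unknown category: {category}")
    return {"category": category, "source": source,
            "payload": payload, "metadata": metadata}

# A recorded phone call, stored as a pointer to the audio plus
# the called/calling numbers as metadata (numbers are made up).
call = make_record(TRANSACTIONAL, "telephone", "audio/call-0421.wav",
                   calling="202-555-0101", called="617-555-0199")
```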

LifeLog can be used as a stand-alone system to serve as a powerful automated multimedia diary and scrapbook. By using a search engine interface, the user can easily retrieve a specific thread of past transactions, or recall an experience from a few seconds ago or from many years earlier in as much detail as is desired, including imagery, audio, or video replay of the event. In addition to operating in this stand-alone mode, LifeLog can also serve as a subsystem to support a wide variety of other applications, including personal, medical, financial, and other types of assistants, and various teaching and training tools. As increasing numbers of people acquire LifeLogs, collaborative tasks could be facilitated by the interaction of LifeLogs, and properly anonymized access to LifeLog data might support medical research and the early detection of an emerging epidemic. Application of the LifeLog abstraction structure in a synthesizing mode will eventually allow synthetic game characters and humanoid robots to lead more "realistic" lives. However, the initial LifeLog development is tightly focused on the stand-alone system capabilities, and does not include the broader class of assistive, training, and other applications that may ultimately be supported.

LifeLog technology will support the long-term IPTO vision of a new class of truly "cognitive" systems that can reason in a variety of ways, using substantial amounts of appropriately represented knowledge; can learn from experiences so that their performance improves as they accumulate knowledge and experience; can explain their actions and can accept direction; can be aware of their own behavior and reflect on their own capabilities; and can respond in a robust manner to surprises.


This solicitation seeks proposals to develop and demonstrate LifeLog system-level capabilities as described in the following tasks:

Task 1: Representation and Abstraction via Reasoning and Inference

The research focus of the LifeLog program is the placement of transactional and physical data within an appropriate framework of representations and abstractions, to make accessible both the flow of the user's physical experiences in the world and the stream of his or her interactions with other entities in the world. For transactional data, this process of representation and abstraction might begin with the association of metadata with each data item (e.g., the header information in an email or the information on the envelope of a physical letter). Physical data streams generally have to be parsed into meaningful “chunks,” such as “saccadic” scenes of video, motion segments in GPS or inertial data, or segments of one person’s speech in audio, and these chunks have to be labeled.

The key challenge of LifeLog is to make sense of this ongoing sequence of multi-modal transactions and labeled chunks of physical data by sorting it into discrete “events” and “states” (whose transitions are marked by events), “threads” (consisting of sequences of events and states), and “episodes” (with beginnings and ends), and to do this automatically and recursively until an extended episode can be identified and labeled as, for example, “I took the 08:30 a.m. flight from Washington's Reagan National Airport to Boston's Logan Airport.” The representational path from the raw physical sensor inputs to this high-level description includes concepts of walking, standing, and riding, being indoors and outdoors, being “at home,” taking a taxi, and going through airport security. The task can be made considerably easier because LifeLog can also process a “going to Boston” entry in the calendar program, email from the airline confirming that the flight is on time, and a phone call ordering the taxi, and can correlate GPS readings to a COTS street map.
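The parsing of a physical stream into labeled "chunks" (such as the motion segments in GPS data mentioned above) can be illustrated with a minimal sketch that thresholds the speed between successive GPS fixes. The threshold value and segment labels are assumptions, not specified by the BAA:

```python
def segment_motion(fixes, speed_threshold=0.5):
    """Split a GPS track into labeled motion segments.

    fixes: list of (t, x, y) tuples, t in seconds, x/y in meters
           (a real system would use lat/lon and a proper projection).
    Returns a list of (label, start_t, end_t) with label in
    {"moving", "stationary"}; adjacent same-label chunks are merged.
    """
    segments = []
    for (t0, x0, y0), (t1, x1, y1) in zip(fixes, fixes[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / (t1 - t0)
        label = "moving" if speed > speed_threshold else "stationary"
        if segments and segments[-1][0] == label:
            # extend the current segment rather than starting a new one
            segments[-1] = (label, segments[-1][1], t1)
        else:
            segments.append((label, t0, t1))
    return segments
```

Higher layers would then label these chunks with concepts like "taking a taxi" by correlating them against calendar entries, street maps, and other transactional data, as the paragraph describes.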
Beyond the generation of the user’s individual timeline or history, represented as a structure of labeled threads and episodes, LifeLog will be able to find meaningful patterns in the timeline, to infer the user’s routines, habits, and relationships with other people, organizations, places, and objects, and to exploit these patterns to ease its task.
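Pattern-finding over the timeline might, in its simplest form, look for episode labels that recur at the same time of day across several distinct days. The tuple representation and the cutoff below are purely illustrative:

```python
def find_routines(episodes, min_days=3):
    """Flag candidate routines: episode labels that recur at the same
    hour of day on at least `min_days` distinct days.

    episodes: list of (label, day_index, hour_of_day) tuples.
    Returns a sorted list of (label, hour) pairs.
    """
    days_seen = {}
    for label, day, hour in episodes:
        days_seen.setdefault((label, hour), set()).add(day)
    return sorted(key for key, days in days_seen.items()
                  if len(days) >= min_days)
```

A deployed system would need far richer pattern mining (relationships, places, objects), but the principle is the same: recurring structure in the timeline becomes a reusable abstraction.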

The proposal should describe in detail how the offeror’s LifeLog system will accomplish this process of “tracing the threads” and “telling the story” of the user’s experience. State how physical sensory inputs will be parsed and classified (labeled). Define the metadata to be used for each type of input data. Describe how the representation hierarchy is to be constructed, and how classification of events, states, etc., will be performed. Explicitly address the extraction of patterns such as routines, habits, and relationships. Present an approach for assessing the contribution of each proposed data source to LifeLog system-level performance. Provide a comparison of the relative importance of human knowledge engineering and machine learning components, both during system development and when deployed. Discuss the tools to be provided to the user to support the visualization and manual generation and editing of the representational hierarchy.

Task 2: Data Capture and Storage Subsystem

LifeLog must acquire data to capture both the user's physical experiences in the world and his or her interactions with other entities in the world. The specific types and fidelity of data to be captured should be driven by the needs implied by the offeror's approach to Task 1. Physical data is captured by various physical sensors and is stored as multiple data streams in appropriate formats at appropriate resolutions. Transactional data is extracted principally from a number of computer applications. Detectors, recognizers, analysis tools, and heuristics are used to “distill” the data, associating metadata, flagging keywords, and otherwise preparing the data for further categorization in terms of representations at various levels of abstraction. Data capture capability must be adequate to support the development of LifeLog, but should not involve new development of sensors.
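One hypothetical "distiller" of the kind described: it carries an email's header fields forward as metadata and flags which watch keywords appear in the body. The keyword list, record layout, and example headers are assumptions for illustration:

```python
import re

def distill_email(raw_headers, body, keywords):
    """Prepare one email for later categorization: keep the header
    fields as metadata and flag the watch keywords the body mentions.
    Returns a dict with "metadata" and a sorted "flags" list."""
    flagged = sorted({w for w in keywords
                      if re.search(r"\b" + re.escape(w) + r"\b", body, re.I)})
    return {"metadata": dict(raw_headers), "flags": flagged}

# Example with made-up headers and a made-up keyword watchlist.
rec = distill_email({"From": "airline@example.com", "Subject": "Itinerary"},
                    "Your flight to Boston departs at 08:30.",
                    keywords=["flight", "taxi", "hotel"])
```

The BAA's "detectors, recognizers, analysis tools, and heuristics" would each be a stage like this one, feeding the representation layer of Task 1.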

The proposal should identify the sources and modalities of physical, transactional, and context/media data to be captured, and also the specific sensors and deployment (e.g., wearable) means to be used for gathering physical data, and the methods to be used to acquire transactional and context/media data. The proposal should identify the data storage components to be employed and provide an estimate of the volume of data of each type to be stored per unit time. Selection rationale for components, including critical specifications and estimated costs, should be presented. LifeLog system integration should be specifically addressed, together with power and endurance issues. Offerors must also address human subject approval, data privacy and security, copyright, and legal considerations that would affect the LifeLog development process. Leverage of existing hardware and software is highly encouraged, and LifeLog should interface to commonly used computer applications.

Task 3: Data Access and User Interface Subsystem

The initial LifeLog prototype implementation must provide a functional Application Programming Interface (API), as well as a stand-alone user data access capability which is envisioned to be a search-engine style interface allowing functions (e.g., less than, greater than, Booleans) of the various metadata parameters. Offerors should propose additional features to enhance the user interface (e.g., timeline displays) and to augment the API to support use by additional applications. The developmental interface should also provide a query capability to enable the user to learn why the system behaved as it did. In addition, the interface should provide intervention tools to enable the user to manually create metadata, assign classifications, and edit the abstraction hierarchy. The capabilities of the proposed access scheme should be described in terms of the flexibility of access queries to be supported (of primary concern) and expected performance, such as response time. Leveraging of existing software is encouraged, since the user interface is not a principal subject of research for LifeLog.
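A search-engine-style access layer supporting comparison functions and Booleans over metadata parameters could be as simple as the following predicate evaluator. The query tuple format and function names are assumptions, not part of the envisioned API:

```python
# Comparison functions of metadata parameters, as the solicitation
# suggests (less than, greater than, Booleans). Operator set is a sketch.
OPS = {
    "<":  lambda a, b: a < b,
    ">":  lambda a, b: a > b,
    "==": lambda a, b: a == b,
}

def matches(record, query):
    """Evaluate a query against one record's metadata dict.

    query is either a leaf (field, op, value) or a Boolean node
    ("and"/"or", [subqueries...])."""
    if query[0] in ("and", "or"):
        results = (matches(record, q) for q in query[1])
        return all(results) if query[0] == "and" else any(results)
    field, op, value = query
    return field in record and OPS[op](record[field], value)

def search(records, query):
    """Stand-alone access: return every record satisfying the query."""
    return [r for r in records if matches(r, query)]
```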

Task 4: Experimentation and Performance Assessment

The successful development of LifeLog will require extensive experimentation to provide both the system and its developers with enough “experience” to be representative of use in the real world. The first LifeLog users will clearly be the developer team itself, and, once a critical initial threshold of capability has been achieved, the results of this use should be documented as longitudinal studies. Operating conditions should not be controlled, and a broad spectrum of both physical and transactional data should be captured over weeks of continuous real-world use. The proposal should address performance assessment over these longitudinal studies, including the metrics of completeness of the ontology and correctness of the LifeLog’s classification decisions.

The LifeLog program also includes a “Challenge Problem” in the form of a system demonstration conducted while taking a trip to Washington, D.C. Travel combines physical activity (movement via a variety of conveyances) and a diversity of transactions (email, calendar, financial, itinerary, etc.) over the course of a trip. The Travel Challenge consists of an uncontrolled trip from the user's home to Washington, plus controlled trials involving travel over a government-prescribed course within the D.C. area, each trial lasting less than one day. Each proposer is encouraged to have at least three (3) LifeLog users participate in the Travel Challenge. Proposals should include plans for participation in these experiments, specifically including a plan for measuring the performance of the LifeLog system in terms of correctness and completeness.

The performance metric for correctness of system decisions addresses 1) what fraction of events are correctly detected and properly classified in the abstraction hierarchy, and 2) how capable the system is of learning to improve its detection and classification performance. The performance metric for completeness of the ontology considers 1) what fraction of events require additions to the set of existing representations, and 2) how capable the system is of learning to add and use new representations.

The results of the Travel Challenge will be a major determinant of the scope and course of future LifeLog development, including the exercise of proposed options. Offerors should also propose other challenge activities in addition to the Travel Challenge to demonstrate and assess the richness of the LifeLog representation structure and the complexity of the domain (task and environment). Additional metrics should also be proposed.
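The two metric families reduce to simple ratios over ground-truth event counts. The function signatures below are one possible reading of the correctness and completeness questions, not a prescribed formula:

```python
def correctness(n_events, n_detected, n_classified_ok):
    """Correctness of system decisions: what fraction of ground-truth
    events were detected, and what fraction of detected events were
    placed correctly in the abstraction hierarchy."""
    detection = n_detected / n_events if n_events else 0.0
    classification = n_classified_ok / n_detected if n_detected else 0.0
    return detection, classification

def completeness(n_events, n_needing_new_repr):
    """Completeness of the ontology: fraction of events that could be
    described without adding a new representation."""
    if not n_events:
        return 1.0
    return 1.0 - n_needing_new_repr / n_events
```

The learning-capability halves of each metric ("how capable is the system of improving?") would be measured as the trend of these ratios across successive trials.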

Task 5: Options for Advanced LifeLog Development

The base efforts solicited by this BAA address critical issues that must be tackled to demonstrate a basic LifeLog capability. However, many other equally critical and challenging issues must be addressed to realize a fully deployable LifeLog (sub)system. Therefore, the proposal may include one or more options to perform additional work addressing relevant technical questions, including but not limited to the following:

* How should the LifeLog system enforce security and privacy, given that different data sources may require different restrictions (e.g., classified, proprietary, Privacy Act) on each data element, and a given item of data may be acquired from more than one source?
* How should different people’s LifeLog systems interact with each other? For example, if each person’s LifeLog understands only his/her own speech perfectly, how should multiple LifeLogs share information so that each can acquire and store all parts of a conversation?
* How should LifeLog be implemented so that it can degrade gracefully in its access modes, storage resources, and capture capabilities?
* How can the domain of intentionality (plans and goals) above the level of timeline or history be more fully developed so that LifeLog can effectively support the broadest possible spectrum of assistive and training applications?

Proposed options should include a clear statement of the functionality and performance benefits envisioned, and should define metrics to support the assessment of these benefits.


A Spy Machine of Darpa's Dreams
The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person's life, index it and make it searchable.

The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read.

All of this -- and more -- would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went; audio-visual sensors to capture what he or she sees or says; and biomedical monitors to keep track of the individual's health.

scientia est potentia ("knowledge is power") over you!


THERE'S MORE: The idea of committing everything in your life to a machine is nearly sixty years old. In 1945, Vannevar Bush -- who headed the White House's Office of Scientific Research and Development during World War II -- published a landmark Atlantic Monthly article, "As We May Think." In it, he describes a "memex" -- a "device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility."

Minicomputer visionary Gordon Bell, now working at Microsoft, sees his "MyLifeBits" project as a fulfillment of Bush's vision.

There are other commercial and academic efforts to weave a life into followable threads, including parallel processing prophet David Gelernter's "Scopeware" and "Haystack," from MIT's David Karger.

AND MORE: LifeLog may eventually dwarf Total Information Awareness, Darpa's ultra-invasive database effort. But "TIA" could wind up being pretty damn large on its own, with 50 times more data than the Library of Congress, according to the Associated Press.