DiSSeCt aims to design distributed semantic software solutions and algorithms for the continuous exchange of huge streams of data between different partners in specific ecosystems. By converting data into knowledge, and exchanging this knowledge in an intelligent, secure and dynamic manner, personalised and context-aware services can be offered to end-users. DiSSeCt tackles this fundamental research challenge by offering software solutions, end-user developer tools and guidelines that enable the leap from Big Data to Big Service.

The following concrete objectives and criteria will be realized:

  • Research prototype for semantic enterprise service exposure:
    Methods and algorithms will be developed that allow the automatic generation of functional service descriptions of data sets and services using functional semantic service description languages. A service repository will be created to disseminate these descriptions for workflow composition. A semantic format and self-learning algorithms will be designed that enable the description of service QoS and the selection of the best service matches for a specific workflow based on these QoS parameters and context.
  • Research prototype for scalable processing of streaming data:
    A matrix-style architecture will be designed for the scalable processing of event and continuous streaming data, in which services register for processes and receive data using static but configurable wiring. Self-learning grabber/processing components will be designed to convert a continuous stream into discrete events (facilitating distributed stream reasoning). These components will be combined with Big Data frameworks and self-learning algorithms for service distribution (through dynamic replication or adaptation of scope) to ensure performance. Scalable ontology-based algorithms will be designed to achieve real-time data processing, which (a) balance reasoning between deployed services and remote sensors, (b) minimize network overhead and (c) balance the load on services and sensors.
  • Research prototype of a scalable, secure & decentralized functional workflow engine:
    Algorithms will be designed that allow the automatic composition and execution of workflows based on functional descriptions, in accordance with enterprise policies and rules. Systematic benchmarking methodologies will be documented that quantify and compare the impact of different scalability tactics. A rule-based security framework and accompanying techniques will be designed in which workflows act as pre- and post-conditions of security policies, expressing dynamic access control decisions.
  • Research prototype of end-user developer tools for user-friendly management:
    Guidelines, rules of thumb and developer tools will be designed that allow end-user developers to easily publish data and services, and to specify desired functional workflows and their requirements using the above prototypes.
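To make the first objective concrete, the sketch below (with hypothetical service names and QoS parameters) illustrates the kind of QoS-based service selection envisaged: given candidate services annotated with QoS values, the best match for a workflow step is chosen by a context-dependent weighting. The actual prototype would use semantic descriptions rather than plain dictionaries.

```python
# Illustrative sketch only: QoS-based selection of the best service match.
# Service records and weight names are hypothetical, not part of DiSSeCt.

def select_service(candidates, weights):
    """Rank candidate services by a weighted QoS score.

    Lower latency and higher availability yield a higher score; the weights
    encode the context-dependent importance of each QoS parameter.
    """
    def score(svc):
        return (weights["availability"] * svc["availability"]
                - weights["latency"] * svc["latency_ms"] / 1000.0)
    return max(candidates, key=score)

services = [
    {"name": "svcA", "latency_ms": 120, "availability": 0.99},
    {"name": "svcB", "latency_ms": 40,  "availability": 0.95},
]
# In a latency-sensitive context, the faster service wins despite
# slightly lower availability.
best = select_service(services, {"latency": 0.7, "availability": 0.3})
```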
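The grabber components of the second objective convert a continuous stream into discrete events. A minimal sketch of that idea, assuming a simple change-detection rule with a hypothetical threshold (the actual components would be self-learning):

```python
# Illustrative sketch only: turning a continuous sensor stream into
# discrete events by emitting a reading only when it deviates from the
# last emitted value by more than a threshold.

def to_events(stream, threshold):
    events = []
    last = None
    for t, value in stream:
        if last is None or abs(value - last) > threshold:
            events.append({"time": t, "value": value})
            last = value
    return events

readings = [(0, 20.0), (1, 20.1), (2, 23.5), (3, 23.6), (4, 19.0)]
events = to_events(readings, threshold=2.0)
# Only the readings at t=0, t=2 and t=4 become events.
```

Downstream stream-reasoning services then operate on the much sparser event sequence instead of the raw stream, which is what makes distributed reasoning over it tractable.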
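The rule-based security idea in the third objective can be sketched as follows, with hypothetical rules and context fields: a workflow step is only executed if its pre-condition rules over the request context all hold (post-conditions would be checked the same way after execution).

```python
# Illustrative sketch only: pre-conditions as rules over a request context,
# expressing a dynamic access control decision. Roles and fields are
# hypothetical examples, not part of the DiSSeCt framework.

def check(rules, context):
    """A request is allowed only if every rule holds for its context."""
    return all(rule(context) for rule in rules)

pre_conditions = [
    lambda ctx: ctx["role"] in ("physician", "nurse"),
    lambda ctx: ctx["patient_consent"],
]

request = {"role": "nurse", "patient_consent": True}
allowed = check(pre_conditions, request)  # both rules hold
```

Because the rules are evaluated against the live request context, the same policy yields different decisions as the context changes, which is the "dynamic" aspect of the access control envisaged above.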

The above research prototypes will be combined into a general-purpose Reference Implementation that supports the development and life-cycle management of services and applications for various use-case domains.


Even though the research results will be applicable to multiple application domains, the valorisation perspective and the evaluation of the designed reference implementation, defined in close collaboration with the Advisory Board, focus first on two domains:

  • Integrated provisioning of services for the care sector (eHealth):
    Based on adequate and context-aware aggregation of data and services from different actors (caregivers, patients, informal caregivers), a more continuous spectrum of care and well-being services can be realised.
  • Support for context-awareness and personalisation in multimodal work/leisure time transportation:
    By consolidating the plethora of available data sources (traffic situation, train/bus schedules, natural language processing of social media, etc.), a traveller can be given personalised support (depending on preferences, agenda, etc.) to travel more efficiently.