Quantum mechanics studies the laws of physics at the level of individual atoms and elementary particles, which are fundamentally different from the laws of conventional physics. Quantum computing studies how those fundamental differences can be exploited to create faster computers and better information processing devices. This is fundamental research, motivated by the possible applications of quantum computers in the longer term.
It is known that quantum computers can efficiently solve computational problems (e.g. factoring large numbers) which are difficult for conventional computers. Quantum mechanics can also be used to create new, more secure encryption technologies.
The main goal of our quantum computing research group is to design new algorithms for quantum computers. We have invented quantum algorithms for the element distinctness problem (finding two equal elements in an array) and for search in 2D arrays. Both of those algorithms use a new method based on quantum walks (the quantum counterpart of random walks). We have also discovered that quantum computers with very small memory still have an advantage over conventional computers.
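To illustrate the quantum walk idea (a minimal sketch for intuition only, not the group's actual algorithms; the graph, coin, and parameters are chosen for illustration), the following simulates a discrete-time coined quantum walk on a cycle of N nodes:

```python
import numpy as np

# A discrete-time coined quantum walk on a cycle of N nodes.
# The walker's state lives in a 2-dimensional "coin" space
# tensored with an N-dimensional position space.
N = 16
steps = 10

# Hadamard coin acting on the coin space.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Shift operator: coin value 0 moves the walker left, 1 moves it right.
S = np.zeros((2 * N, 2 * N))
for x in range(N):
    S[0 * N + (x - 1) % N, 0 * N + x] = 1  # coin 0: step left
    S[1 * N + (x + 1) % N, 1 * N + x] = 1  # coin 1: step right

# One step of the walk: toss the coin, then shift conditioned on it.
U = S @ np.kron(H, np.eye(N))

# Start at node 0 with a balanced complex coin (gives a symmetric spread).
psi = np.zeros(2 * N, dtype=complex)
psi[0 * N + 0] = 1 / np.sqrt(2)
psi[1 * N + 0] = 1j / np.sqrt(2)

for _ in range(steps):
    psi = U @ psi

# Probability of finding the walker at each node (summed over coin values).
prob = np.abs(psi[:N]) ** 2 + np.abs(psi[N:]) ** 2
```

Unlike a classical random walk, the amplitudes interfere, which is what makes the quantum walk spread quadratically faster and underlies its use in search algorithms.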
The second direction of our research is proving lower bounds for quantum algorithms. A lower bound is a statement that any algorithm solving a certain problem must use at least T steps. Lower bounds are important because they make it possible to show that various quantum algorithms are optimal. Our main research result in this direction is the "quantum adversary" method for proving quantum lower bounds.
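For reference, the spectral formulation of the adversary bound from the literature (a standard statement, not necessarily in the group's original notation) can be written as:

```latex
\[
  \mathrm{ADV}(f) \;=\; \max_{\Gamma}\;
    \frac{\lVert \Gamma \rVert}{\max_i \lVert \Gamma \circ D_i \rVert},
  \qquad
  Q(f) \;=\; \Omega\bigl(\mathrm{ADV}(f)\bigr),
\]
```

where Γ ranges over "adversary matrices" satisfying Γ[x, y] = 0 whenever f(x) = f(y), D_i[x, y] = 1 iff inputs x and y differ in position i, ∘ is the entrywise product, and Q(f) is the bounded-error quantum query complexity of f.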
Our methods are widely used by other quantum computing researchers. Quantum walks are one of the most widely used methods for constructing quantum algorithms and have been applied to a variety of problems: graph problems, matrix problems, and the analysis of physical systems. The "quantum adversary" method is the most widely used method for proving quantum lower bounds.
Researchers: Andris Ambainis, Aleksandrs Belovs, Abuzer Yakaryilmaz, Raqueline Azevedo Medeiros Santos, Ashutosh Rai, Aleksandrs Rivošs, Dmitrijs Kravčenko, Nikolajs Nahimovs, Kaspars Balodis, Jevgēnijs Vihrovs, Krišjānis Prūsis, Agnis Āriņš, Jānis Iraids, Mārtiņš Kokainis.
Data warehousing today faces the challenge of Big Data. New problems are emerging, connected with the volume and variety of the data. At the same time, traditional data warehousing problems remain open.
Data warehouse evolution problems have been studied in the context of relational database environments. A multi-versioning approach to handle data warehouse evolution has been introduced and a tool to define and execute reports on multiple data warehouse schema versions has been implemented. The tool is based on metadata models that describe data warehouse schema versions at conceptual, logical and physical levels.
Due to the emergence of Big Data technologies and the necessity to perform OLAP-like analysis over Big Data, an innovative data warehousing solution over Big Data is being developed, capable of adapting to evolving user needs and to changes in the underlying data sources.
Developing a data warehouse that fits all requirements of potential users is not an easy task. In the course of our research, a metamodel for representing information requirements (acquired from user interviews) in a more formal way has been proposed. A semi-automated method for transforming the collected information requirements into a conceptual model of a data warehouse has also been developed. A tool prototype (iReq) supports the input of formalized information requirements and generates candidate conceptual models of a data warehouse, mapping the requirements to a design.
Due to the large volumes of data and reports accumulated in data warehouses, report exploration and execution is a tedious and time-consuming task. A data warehouse personalization approach aimed at delivering the data most relevant to a user has been developed. A recommendation component based on implicitly or explicitly defined user preferences on elements of the data warehouse schema was implemented in the reporting tool and evaluated by a set of real users in an empirical study.
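The recommendation idea can be sketched as follows (a simplified illustration, not the actual implementation; all element names, weights, and the scoring function are assumptions): reports are ranked by how well the schema elements they use match the user's preference weights.

```python
# User preferences on data warehouse schema elements: explicit weights,
# or weights inferred implicitly from past report usage (hypothetical values).
preferences = {"sales.amount": 0.9, "time.month": 0.6, "store.region": 0.4}

# Each report is described by the schema elements it references.
reports = {
    "monthly_sales": ["sales.amount", "time.month"],
    "regional_stock": ["store.region", "product.category"],
    "staff_roster": ["employee.name"],
}

def score(elements):
    """Relevance of a report: sum of preference weights over its elements."""
    return sum(preferences.get(e, 0.0) for e in elements)

# Recommend reports in order of decreasing relevance to this user.
ranked = sorted(reports, key=lambda r: score(reports[r]), reverse=True)
print(ranked)  # ['monthly_sales', 'regional_stock', 'staff_roster']
```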
Researchers: Darja Solodovņikova, Natālija Kozmina, Laila Niedrīte
To build complex information systems in a web environment, special methods and architectures are necessary. An architecture of a multi-tenant adaptive web-based information system, which employs the idea of software as a service, is proposed. The adaptation process is based on a comprehensive user model.
E-learning systems are a special case of information systems that need adaptation: they have a specific user model, a learner model, that serves as the basis for adaptation, and specific adaptation methods have to be developed for them.
Researchers: Aivars Niedrītis, Vija Vagale, Laila Niedrīte, Arnis Voitkāns
Terminology work is multidisciplinary and draws support from a number of disciplines (e.g. logic, epistemology, philosophy of science, linguistics, translation studies, information science and cognitive sciences) in its study of concepts and their representations in special language and general language. It combines elements from many theoretical approaches that deal with the description, ordering and transfer of knowledge.
Terminology work is concerned with terminology used for unambiguous communication in natural, human language. The goal of terminology work is, thus, the clarification and standardization of concepts and terminology for communication between humans. Terminology work may also serve as input for information modelling and data modelling.
The principles and methods should be observed not only for the manipulation of terminological information but also in the planning and decision-making involved in managing a stock of terminology. The main activities include, but are not limited to, the following:
- identifying concepts and concept relations;
- analysing and modelling concept systems on the basis of identified concepts and concept relations;
- establishing representations of concept systems through concept diagrams;
- defining concepts;
- attributing designations (predominantly terms) to each concept in one or more languages;
- recording and presenting terminological data, principally in print and electronic media (terminography).
Objects, concepts, designations and definitions are fundamental to terminology work. Objects are perceived or conceived and abstracted into concepts which, in special languages, are represented by designations and/or definitions. The set of designations belonging to one special language constitutes the terminology of a specific subject field.
Researchers: Vineta Arnicāne, Juris Borzovs, Dace Šostaka, Viesturs Vēzis et al.
The Laboratory for Perceptual and Cognitive Systems at the Faculty of Computing focuses on interdisciplinary research in natural and artificial cognitive systems.
In particular, we are interested in:
a. Perceptual and cognitive systems of different scale, format and content;
b. Formal and axiomatic representation and modeling of different cognitive systems.
We work on visuo-spatial perception and cognition ranging from small-scale environments (in the scope of visual field) to large-scale navigable environments.
Further, we are interested in languages (natural and formal) and the way they represent cognitive processes, structures, and meaning.
Finally, we are also interested in exploring cognitive systems both in single-agent situations and in situations of distributed and extended cognitive systems where not only additional individuals but also technological tools are involved as complementary parts of cognitive processing.
We use a wide range of empirical (experimental and correlative) and mathematical methods to explore perceptual and cognitive processes in both foundational and applied frameworks.
Researchers: Jurģis Šķilters, Līga Zariņa, Ivars Austers, Liene Viļuma, Linda Apse, Nora Bērziņa, Zeynab Babashova, Gurjit Theara.
The stable trend of losing one-third to half of the students in the first year of computing studies motivates us to explore which methods can be used to identify in advance those applicants who have no chance of completing the first study year.
Researchers: Juris Borzovs, Marina Juzova, Laila Niedrīte, Darja Solodovņikova, Uldis Straujums, Jānis Zuters
Model-Based Model of Cognition (MBMC) includes three principles:
- A model is anything that is (or could be) used, for some purpose, in place of something else. In this definition, models are meant to be concrete systems that serve as replacements of concrete target systems.
- Models are the ultimate goal of all kinds of cognition (scientific and non-scientific). Humans and robots need models to manage what is happening in the world around them.
- All kinds of cognition should be assessed as useful, first of all, as the production of models and means of model-building. Means of model-building can be further subdivided into theories, methods, heuristics, etc.
To show the productivity of MBMC, the principal aspects and principal events of the history of computer science and other branches of science will be reconstructed in terms of model-building.
Researchers: Kārlis Podnieks
New possibilities are being researched in large-scale data set analysis and visualization for a new type of hardware – a high-resolution display wall consisting of many (more than 20) standard displays. In this research, a client-server environment is being developed. The environment supports agent-based modeling, as well as the exploration of relational data and its migration to a NoSQL database, through a browser that works with the display wall.
The first stage of the research was devoted to creating a prototype of the display wall. The main research problems were compatibility with popular operating systems and keeping the cost of the display wall as low as possible. Different solution architectures were analyzed, and a display wall prototype was developed that partially satisfies the stated requirements.
The second stage of the research is devoted to improving the display wall prototype. The main research problem is to minimize the amount of data transferred between the computer and the display wall, and different software and hardware compression methods are being explored. One possible use of the display wall has also been explored – the development of an agent-based modeling and simulation environment.
Researchers: Guntis Arnicāns, Ģirts Karnītis, Rūdolfs Bundulis
In this research, runtime verification is understood as the analysis of a computerized system and the evaluation of the correctness of its operation at runtime, in its operational environment. Correctness can be evaluated either by tools built into the system or by external monitoring of system events; this research focuses on the latter. Verification is performed according to a verification description (model) for each process, which defines the events that confirm the correctness of each process step, their execution sequence, and restrictions on their execution time.
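Such a verification model can be sketched as follows (a deliberately simplified illustration, not the actual prototype; event names and time limits are assumptions): the model lists, for each process step, the event that confirms it and the maximum allowed time since the previous event, and the monitor checks an observed event trace against it.

```python
# Verification model for one process: (event_name, max_seconds_since_previous).
# A limit of 0.0 means no time restriction for that step.
model = [("order_received", 0.0),
         ("payment_confirmed", 60.0),
         ("order_shipped", 3600.0)]

def verify(trace, model):
    """Check an observed trace of (event_name, timestamp) pairs against
    the model. Returns (ok, message)."""
    if len(trace) != len(model):
        return False, "unexpected number of events"
    prev_time = trace[0][1]
    for (event, t), (expected, limit) in zip(trace, model):
        if event != expected:            # wrong event or wrong order
            return False, f"expected {expected!r}, observed {event!r}"
        if limit and t - prev_time > limit:  # execution time restriction
            return False, f"{event!r} exceeded time limit of {limit}s"
        prev_time = t
    return True, "process trace conforms to the model"

ok, msg = verify([("order_received", 0.0),
                  ("payment_confirmed", 12.5),
                  ("order_shipped", 500.0)], model)
print(ok, msg)  # True process trace conforms to the model
```

In the actual system the trace is not given up front: events are captured in the runtime environment and forwarded to the controller as they occur, but the conformance check per step is of this kind.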
In the first stage of the research, a prototype of a runtime environment monitoring system was developed. It captures runtime environment events and sends them via autonomous agents to a controller, which monitors the events and verifies them against the verification model.
In the second stage of the research, the prototype developed in the first stage was used to verify a real-life business process, in order to measure the additional workload that runtime verification adds to an information system. The obtained measurements show that this additional workload is negligible, which demonstrates the practical usability of the proposed business process runtime verification mechanism.
Researchers: Jānis Bičevskis, Ģirts Karnītis, Zane Bičevska, Ivo Odītis
The seminar focuses on different approaches to creating and selecting the content of Computing courses in schools. During the seminars, the curriculum for Computing in schools is analyzed and developed, and new methodological support is being created for Latvian educational institutions.
Researchers: Viesturs Vēzis, Ojārs Krūmiņš, et al.