LLY-DML is part of the LILY Project and focuses on the optimization of parameter-based quantum circuits. It improves the efficiency of quantum algorithms by fine-tuning the parameters of quantum gates. DML stands for Differentiable Machine Learning, emphasizing the use of gradient-based optimization techniques to improve the performance of quantum circuits.
LLY-DML is available on the LILY QML platform, making it accessible for researchers and developers.
For inquiries or further information, please contact: info@lilyqml.de.
| Role | Name | Links |
|---|---|---|
| Project Lead | Leon Kaiser | ORCID, GitHub |
| Inquiries and Management | Raul Nieli | |
| Supporting Contributor | Eileen Kühn | GitHub, KIT Profile |
| Supporting Contributor | Max Kühn | GitHub |
- Hyperdimensional Properties
- Reflection in Hyperdimensional Computing
- Spheres
- Static and Dynamic Vector Regions
- Global and Local Measurements
The encoder's goal is to transform word-based input into a high-dimensional vector space while preserving the relationships between the resulting vectors. To do so, the encoder goes through a series of steps, including forming spheres and sub-spheres along the high-dimensional vector. A neural network, optimized through NEAT (NeuroEvolution of Augmenting Topologies), reacts to the input tokens and forms individual regions along the high-dimensional vector.
- Input Processing:
- The input in the form of words is passed to the encoder.
- The transformer tokenizes the input and passes it through several transformation layers.
- The transformer outputs the individual objects of the sentence as tokens and classifies the relationships between these tokens.
- Processing by the Neural Network:
- The neural network, called Neural HDS (HyperDimensional Space), takes over the data and maps the relationships to the high-dimensional vectors.
- Transfer to the HDC Engine:
- The data from the neural network is passed to the HDC (HyperDimensional Computing) engine.
- The HDC engine takes the neural network data and maps it to the high-dimensional space.
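The pipeline above can be sketched in Python. Everything here is an illustrative assumption, not the project's actual API: the function names are invented, and the NEAT-optimized Neural HDS network is replaced by a seeded random token-to-vector mapping so the sketch stays self-contained.

```python
import random

def tokenize(sentence):
    # Stand-in for the transformer's tokenization step.
    return sentence.lower().split()

def classify_relations(tokens):
    # Stand-in for the transformer's relation classification:
    # here, simply pair each token with its right-hand neighbour.
    return [(a, b) for a, b in zip(tokens, tokens[1:])]

def neural_hds_map(tokens, dim=16):
    # Stand-in for Neural HDS: deterministically map each token to a
    # bipolar high-dimensional vector. A real system would use the
    # NEAT-optimized network here instead of a seeded RNG.
    vectors = {}
    for t in tokens:
        rng = random.Random(t)  # seed by token for reproducibility
        vectors[t] = [rng.choice((-1, 1)) for _ in range(dim)]
    return vectors

sentence = "the green apple hangs on the tree"
tokens = tokenize(sentence)
relations = classify_relations(tokens)
vectors = neural_hds_map(tokens)
# Each token is now represented as a 16-dimensional bipolar vector,
# ready to be handed to the HDC engine.
```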
The neural network passes the processed data to the HDC (HyperDimensional Computing) engine, which converts it into high-dimensional vectors. The following operations can then be performed on these vectors:
- Addition of Vectors: Based on their relationships, vectors can be added to create common properties or merge their features. For example, when adding "green" and "apple," the attribute "fresh" is attributed to the apple.
- Entanglement Between Vectors: Individual relationships can be represented along the spheres of the high-dimensional vector. For example, the vector area representing the apple's location is entangled with the tree's vector, creating a tendency towards the tree.
- Bundling of Vectors: Bundling refers to the combined addition, multiplication, or entanglement of several vectors. It serves to condense information.
- Permutation of Vectors: The individual vector areas are rearranged or loaded with altered values to determine new results.
- Extending the Vector: Pattern elements are assigned to the vector to solve complex questions. The data is further permuted until a suitable concept is found, and a desired output is generated.
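These operations map closely onto the standard hyperdimensional-computing primitives of bundling, binding, and permutation. A minimal sketch with bipolar (+1/-1) vectors follows; it assumes nothing about the engine's actual representation, and all function names are illustrative.

```python
import random

DIM = 64

def random_hv(seed):
    # Deterministic bipolar hypervector, seeded for reproducibility.
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bundle(*vectors):
    # Addition/bundling: elementwise sum, then sign; the result shares
    # features with all inputs (ties resolved to +1).
    return [1 if sum(vals) >= 0 else -1 for vals in zip(*vectors)]

def bind(a, b):
    # Binding ("entanglement"): elementwise multiplication links two
    # vectors; the result is dissimilar to both inputs.
    return [x * y for x, y in zip(a, b)]

def permute(v, shift=1):
    # Permutation: cyclic rotation rearranges the vector regions.
    return v[-shift:] + v[:-shift]

def similarity(a, b):
    # Normalized dot product: 1.0 for identical, near 0 for unrelated.
    return sum(x * y for x, y in zip(a, b)) / DIM

green, apple = random_hv("green"), random_hv("apple")
green_apple = bundle(green, apple)          # merges features of both
location = bind(apple, random_hv("tree"))   # bound apple-tree pair
```

Note that `bind(a, a)` yields the all-ones vector, so binding is its own inverse, which is what lets bound relationships be unpacked again later.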
In reflection, the data extracted from the high-dimensional space is evaluated and processed. The decoder first measures all necessary data and provides it to the transformer, which formulates the data and checks whether it is consistent and answers the question posed. If not, the transformer passes the data to the reflection neural network, which permutes the vectors and data in the high-dimensional space.
The objective of reflection is to validate the data and determine whether the posed question is answered. If not, feedback is sent to adjust the vectors.
After the data from the high-dimensional space is passed to the transformer, it checks whether the data can answer the question posed. If not, the critique is transferred to the Neural Reflector Engine as classifications. This engine permutes and processes the respective places in the vectors to achieve the desired result. This process is repeated until a valid result is generated. If a valid result is generated, the transformer formulates the result and outputs it.
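The validate-permute-repeat cycle described above can be sketched as a simple loop. The check and the "Neural Reflector Engine" are replaced here by trivial stand-ins (an equality test and a one-step rotation); all names are illustrative assumptions.

```python
def answers_question(data, target):
    # Stand-in for the transformer's consistency/answer check.
    return data == target

def reflect(data):
    # Stand-in for the Neural Reflector Engine: permute the data
    # (here: rotate one position) to move toward a valid result.
    return data[1:] + data[:1]

def reflection_loop(data, target, max_rounds=100):
    # Repeat until the transformer accepts the result or a round
    # limit is hit (a real system would not loop forever either).
    rounds = 0
    while not answers_question(data, target) and rounds < max_rounds:
        data = reflect(data)
        rounds += 1
    return data, rounds

result, rounds = reflection_loop([3, 1, 2], [1, 2, 3])
# One rotation turns [3, 1, 2] into the accepted result [1, 2, 3].
```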
A sphere is a multi-qubit system that can have different purposes and structures. The term "sphere" is derived from representing this multi-qubit system as a Q-sphere. All possible states of the system, represented by the number of qubits, can be depicted on the surface of a sphere. Depending on the input data, these different states are activated, giving the multi-qubit system unique properties. The composition of the sphere can vary, so spheres can be created for different purposes.
- Perception Sphere: The task of the perception sphere is to process the data and pass it to the processor.
- Classifying Sphere: The task of the classifying sphere is to measure and categorize the data in a unique form.
The spheres are directly connected and entangled, allowing each to influence the other's data.
The sub-spheres of the individual spheres process tasks specifically and then pass the processed data back to the actual sphere.
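As a rough illustration of the sphere structure, the sketch below models a sphere as an n-qubit system with 2**n basis states (the points on the Q-sphere's surface) and gives the two sphere roles trivial stub behaviour. The class names and methods are assumptions for illustration only.

```python
class Sphere:
    def __init__(self, num_qubits):
        self.num_qubits = num_qubits
        # All possible basis states of the multi-qubit system, as
        # depicted on the surface of a Q-sphere.
        self.num_states = 2 ** num_qubits

class PerceptionSphere(Sphere):
    def process(self, data):
        # Stub: map raw data onto the sphere's state space before
        # passing it on to the processor.
        return [d % self.num_states for d in data]

class ClassifyingSphere(Sphere):
    def classify(self, data):
        # Stub: measure and categorize each value by state index.
        half = self.num_states // 2
        return {d: ("low" if d < half else "high") for d in data}

perception = PerceptionSphere(num_qubits=3)    # 2**3 = 8 basis states
classifying = ClassifyingSphere(num_qubits=3)
processed = perception.process([1, 5, 9])
labels = classifying.classify(processed)
```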
- Static Vectors: Static vectors represent the actual space in which all necessary data is located.
- Dynamic Vectors: Dynamic vectors allow certain regions to shift based on the questions posed. They operate based on specific dynamic protocols that can represent the desired state more accurately.
Measurement takes place on several levels:
- Local Level: Spheres and their significance are measured individually.
- Contextual Level: Spheres are measured within their encoding context.
- Global Level: Spheres within the entire system are measured.
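The three measurement levels can be sketched as progressively wider aggregations over sphere readouts. The sphere names, values, and the averaging scheme below are all illustrative assumptions.

```python
# Hypothetical sphere readouts and context grouping.
spheres = {"perception": 0.8, "classifying": 0.4, "memory": 0.6}
contexts = {"encoding": ["perception", "classifying"]}

def measure_local(name):
    # Local level: one sphere measured individually.
    return spheres[name]

def measure_contextual(context):
    # Contextual level: spheres measured within one context.
    names = contexts[context]
    return sum(spheres[n] for n in names) / len(names)

def measure_global():
    # Global level: all spheres within the entire system.
    return sum(spheres.values()) / len(spheres)
```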