Timon Harz
December 14, 2024
Stanford Researchers Propose SMOOTHIE: Machine Learning Algorithm for Label-Free Routers in Generative Tasks
Stanford's SMOOTHIE algorithm routes generative tasks to the best-suited language model without any labeled data. Discover its potential to transform industries and improve machine learning systems.

Language model routing is an emerging field focused on optimizing the use of large language models (LLMs) across tasks. LLMs are increasingly employed in text generation, summarization, reasoning, and more, and a central challenge is routing each task to the model best equipped to handle it. Striking this balance between efficiency and accuracy is key to handling complex workloads effectively.
One major hurdle in deploying LLMs is selecting the right model for a given task. Although many pre-trained LLMs are available, their performance can vary significantly depending on the task. Traditionally, determining the best model involves labeled datasets or human annotations, methods that are resource-intensive and limit scalability. This is especially problematic in real-time applications or those requiring a broad range of capabilities.
Current routing methods often rely on auxiliary training or heuristic-based approaches, which depend on labeled datasets to rank or select the optimal model. While these methods can be effective, they are constrained by the availability of high-quality annotated data and the computational costs of training additional models, limiting their wider applicability.

Stanford University researchers have introduced SMOOTHIE, an innovative unsupervised language model routing method designed to address the limitations of labeled data. Drawing on weak supervision principles, SMOOTHIE uses a latent variable graphical model to assess the outputs of multiple large language models (LLMs). By estimating sample-specific quality scores, it routes inputs to the LLM most likely to produce the best results. This approach eliminates the need for labeled datasets, significantly reducing resource requirements.
SMOOTHIE comes in two versions: SMOOTHIE-GLOBAL and SMOOTHIE-LOCAL. SMOOTHIE-GLOBAL estimates a single quality score per model over the entire dataset, offering a broad performance ranking of the available models. In contrast, SMOOTHIE-LOCAL sharpens these estimates by focusing on each sample's nearest neighbors in embedding space. The methodology represents observable outputs as embeddings and models the discrepancies between generated and (latent) true outputs as a multivariate Gaussian, which yields closed-form estimators for the quality scores. SMOOTHIE-LOCAL additionally applies kernel smoothing to these estimates, so routing decisions are tuned to each individual sample.
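To make the idea concrete, here is a minimal sketch of a dataset-level (GLOBAL-style) quality estimator. It assumes each model's output embedding equals the unobserved "true" output embedding plus independent Gaussian noise, so pairwise distances between models can be triangulated into per-model error estimates. The function names, the use of NumPy, and the toy noise scales are illustrative choices for this sketch, not the authors' released implementation.

```python
import numpy as np
from itertools import combinations

def global_quality_scores(emb):
    """Label-free, dataset-level quality estimate for each model.

    emb: array of shape (n_models, n_samples, dim), one output embedding
    per model per input. Assumes each model's embedding is the unobserved
    true embedding plus independent Gaussian noise (a simplification of
    the paper's latent variable model) and that at least 3 models are given.
    Returns one score per model; higher means lower estimated error.
    """
    n_models = emb.shape[0]
    # Mean squared distance between every pair of models' outputs.
    D = np.zeros((n_models, n_models))
    for i in range(n_models):
        for j in range(n_models):
            D[i, j] = np.mean(np.sum((emb[i] - emb[j]) ** 2, axis=-1))

    # Triangulation: under independent errors,
    #   E||model_a - truth||^2 ≈ 0.5 * (D[a,b] + D[a,c] - D[b,c])
    # for any two other models b and c. Average over all such triplets.
    err = np.zeros(n_models)
    counts = np.zeros(n_models)
    for i, j, k in combinations(range(n_models), 3):
        for a, b, c in [(i, j, k), (j, i, k), (k, i, j)]:
            err[a] += 0.5 * (D[a, b] + D[a, c] - D[b, c])
            counts[a] += 1
    err /= counts
    return -err  # negate so the best (lowest-error) model scores highest

# Toy usage: 3 models, 100 inputs, 384-dim output embeddings.
rng = np.random.default_rng(0)
truth = rng.normal(size=(100, 384))
noise_scales = [0.2, 0.5, 1.0]  # model 0 is the most accurate
emb = np.stack([truth + s * rng.normal(size=truth.shape) for s in noise_scales])
print(global_quality_scores(emb).argmax())  # expected to print 0
```

The key property of the estimator is that it never touches a label: only distances between the models' own outputs are used to infer which model sits closest to the unobserved truth.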

SMOOTHIE’s performance was rigorously tested across various datasets and settings. SMOOTHIE-GLOBAL successfully identified the best-performing model in 9 out of 14 tasks. For example, on datasets like AlpacaEval, SMOOTHIE-GLOBAL boosted win rates by up to 15 percentage points over random-selection baselines and by 8 points on SQuAD. The LOCAL variant further outperformed both global and supervised routing methods in multi-task scenarios. On mixed-task datasets, SMOOTHIE-LOCAL improved task accuracy by up to 10 points compared to baseline methods. Additionally, it demonstrated strong correlations between estimated and actual model quality, with a rank correlation coefficient of 0.72 on natural language generation tasks and 0.94 on MixInstruct. Notably, SMOOTHIE’s local routing allowed smaller models to outperform larger ones in several configurations, showcasing its effectiveness in resource-efficient scenarios.
These results highlight SMOOTHIE’s potential to revolutionize LLM routing by eliminating the reliance on labeled data and auxiliary training. By combining weak supervision with advanced quality estimation models, SMOOTHIE enables robust and efficient routing decisions in multi-capability environments. The research presents a scalable, practical solution for improving LLM performance, offering a pathway to broader real-world adoption where task diversity and accuracy are critical.

This research represents a significant breakthrough in language model routing. By tackling the challenges of task-specific LLM selection with an unsupervised approach, it paves the way for more efficient deployment of LLMs across a range of applications. The introduction of SMOOTHIE simplifies the process while significantly improving output quality, highlighting the increasing potential of weak supervision in artificial intelligence.
The SMOOTHIE algorithm, introduced by researchers from Stanford, presents a novel method for routing language models (LLMs) without relying on labeled data. In many AI systems, especially when dealing with large-scale tasks, the quality of output varies based on the LLM used. Traditionally, engineers have selected models for each task based on human-annotated data, but this approach is resource-intensive. SMOOTHIE, on the other hand, aims to optimize routing without requiring labeled data.
This method leverages a graphical model that connects observable outputs from different LLMs to unobserved “true” outputs. By constructing a latent variable model over the output embeddings, SMOOTHIE estimates a quality score for each LLM on a given task. The process is entirely unsupervised, making it efficient and scalable. The algorithm identifies the best LLM for each task, outperforming traditional routing techniques that rely on pre-labeled data by up to 10 percentage points in accuracy across multiple tasks.
The core of SMOOTHIE’s approach is its ability to perform routing decisions based solely on the model’s performance on each input, rather than requiring additional external supervision. This makes the algorithm particularly useful in scenarios where labeled data is sparse or unavailable, further enhancing its flexibility and applicability in real-world applications.
For a deeper dive into the methodology and results, the full paper is available on arXiv.
The key innovation of the SMOOTHIE algorithm proposed by Stanford researchers lies in its ability to choose the right model for generative tasks without relying on labeled data, a significant departure from traditional model-selection workflows. In typical machine learning pipelines, labeled datasets are crucial for judging which model performs best on tasks such as content creation or data augmentation; collecting those labels can be time-consuming, expensive, and sometimes impractical.
SMOOTHIE addresses this challenge with an unsupervised approach. Rather than requiring labeled examples, it leverages only unlabeled inputs and the candidate models' own outputs, uncovering agreement patterns among those outputs to judge their quality. This opens the door to producing useful, high-quality generations without manually curated evaluation datasets. Working entirely from unlabeled data, SMOOTHIE learns which model is most dependable for generating content such as text, summaries, or synthetic data in transformer-based generative pipelines.
What makes SMOOTHIE particularly exciting is that this label-free routing applies across a wide range of generative tasks. Systems built on the algorithm could produce coherent, contextually relevant text without a labeled corpus for model selection, enabling applications in automated content creation, data augmentation for training AI models, and synthetic data generation. This could dramatically lower the barrier to entry for developers and researchers, who could assemble multi-model pipelines without the labeled datasets such systems typically require. Moreover, it improves the scalability and efficiency of generative AI by reducing its dependence on large, expensive labeled datasets.
In short, the SMOOTHIE algorithm introduces a paradigm shift by decoupling model selection from the need for labeled data, thereby making it easier, faster, and more cost-effective to deploy machine learning solutions for generative tasks across various domains.
Background
Generative tasks in machine learning play a pivotal role in creating models that can not only understand data but also generate new, synthetic data. This ability is fundamental for applications in image and text generation, predictive modeling, and even drug discovery. However, the implementation of generative tasks presents several challenges that researchers and practitioners must address.
One of the primary challenges is overcoming implicit assumptions embedded in traditional machine learning models. For instance, many generative models assume that data points are independent of one another, but this is not always the case in real-world applications. For example, in time-series data or data from related individuals, dependencies often exist that need to be modeled explicitly. Generative models must therefore account for these relationships, which can be difficult without introducing additional complexity into the learning process.
Another significant challenge lies in the incorporation of prior knowledge. While large datasets have propelled advancements in generative models, real-world applications such as healthcare and personalized medicine often face the problem of limited data. In these cases, domain-specific prior knowledge becomes invaluable, but integrating this expertise into generative models, particularly in areas like diffusion models and variational autoencoders (VAEs), remains a complex task. The challenge is ensuring that the models can effectively leverage this prior knowledge without compromising the flexibility of the generative process.
Additionally, the lack of causal reasoning in many generative models limits their applicability. Most existing models focus on identifying statistical correlations in data rather than understanding the underlying causal mechanisms that govern those correlations. This can lead to biased outputs or inaccurate predictions in real-world scenarios. By integrating causal models into generative tasks, we could enhance model interpretability, fairness, and robustness, making these models more reliable for sensitive applications like decision-making or policy development.
Finally, as generative models expand to handle diverse data types, such as combining textual and visual data, the challenge of dealing with heterogeneity becomes more pronounced. In specialized fields like healthcare, for example, integrating various forms of data—images, health records, and genomic sequences—requires overcoming issues of data privacy, interoperability, and missing values. This complexity can significantly hinder the development of models that are both accurate and scalable.
In conclusion, while generative tasks have vast potential, their successful implementation requires addressing these challenges through better model design, incorporating domain knowledge, and advancing our understanding of causality and data heterogeneity.
In machine learning, labeled data plays a crucial role, particularly in supervised learning. It serves as a foundation for training algorithms to identify patterns by associating raw data with specific labels, such as categorizing an image of an animal as either "cat" or "dog." However, labeling data can be time-consuming, costly, and prone to errors, especially when it involves large datasets or complex tasks, like medical imaging. Moreover, biases in labeling can introduce inaccuracies into machine learning models.
This is where SMOOTHIE offers an innovative solution. It reduces the reliance on extensive labeled data for one particularly costly step: deciding which model to use. By drawing on weak supervision and unsupervised quality estimation, SMOOTHIE lets a system choose among models using only raw, unlabeled inputs and the models' own outputs. This approach alleviates some of the challenges associated with labeling, such as high costs and biases, and can speed up the development of machine learning applications, making them more accessible for businesses and researchers.
Overview of the SMOOTHIE Algorithm
SMOOTHIE (Label Free Language Model Routing) is a novel approach proposed by researchers at Stanford to route each task to the best-suited large language model (LLM) without requiring any labeled data or human-annotated datasets.
The core principle behind SMOOTHIE is that it uses unsupervised methods to estimate which model will perform best for a given input task. Unlike traditional routing methods that require pre-labeled data or auxiliary models, SMOOTHIE builds a latent variable graphical model that considers the outputs from multiple LLMs. By constructing this model over embedding representations of these outputs, it identifies which LLM produces the best quality results. This is done by calculating a quality score for each LLM based on its output, and the task is routed to the model that receives the highest score.
To break this down further, SMOOTHIE essentially compares the outputs of different models in an unsupervised manner. It uses these comparisons to estimate which model is the most suitable for the task at hand. The model works by building a latent structure that relates observable model outputs to "true" outputs, helping SMOOTHIE determine the best model for each unique input, even in the absence of explicit task labels.
By avoiding the need for labeled data, SMOOTHIE simplifies the process of routing tasks to specific models, and its performance is a significant improvement over traditional methods. In tests, SMOOTHIE has been shown to identify the optimal model with high accuracy, outperforming other methods by as much as 10 percentage points in task routing accuracy.
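As a concrete illustration of the routing step itself, here is a minimal sketch that assumes per-sample quality scores have already been estimated (for instance, with a label-free estimator like the one sketched earlier) and simply dispatches each input to the generation from its highest-scoring model. The function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def route(generations, scores):
    """Pick one generation per input from the highest-scoring model.

    generations: list of lists, generations[m][i] is model m's output
        for input i (n_models x n_samples).
    scores: array of shape (n_samples, n_models) holding per-sample,
        per-model quality estimates (computed without labels).
    Returns the routed outputs and the chosen model index per input.
    """
    scores = np.asarray(scores)
    best_model = scores.argmax(axis=1)  # one model index per input
    routed = [generations[m][i] for i, m in enumerate(best_model)]
    return routed, best_model

# Toy usage with two models and three inputs.
gens = [
    ["A short answer.", "Paris", "4"],             # outputs from model 0
    ["A long, detailed answer.", "Lyon", "four"],  # outputs from model 1
]
scores = np.array([[0.2, 0.9], [0.8, 0.1], [0.6, 0.4]])
outputs, chosen = route(gens, scores)
print(chosen)   # [1 0 0]
print(outputs)  # ['A long, detailed answer.', 'Paris', '4']
```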
This technique has broad implications for the future of machine learning, particularly in fields where model selection is critical, and labeled datasets are scarce or expensive to produce.
The SMOOTHIE algorithm is a novel machine learning method designed for label-free routing in generative tasks. One of the core challenges it addresses is the selection of the most appropriate large language model (LLM) for a given input, especially in tasks where labeled data is unavailable. The methodology behind SMOOTHIE involves a unique approach to estimating LLM quality scores for each sample, without needing human-annotated labels.
Major Components of SMOOTHIE
Latent Variable Graphical Model: SMOOTHIE constructs a latent variable graphical model that links the output embeddings of multiple LLMs to an unknown "true" output. The true output is treated as an unobserved latent variable, so the system evaluates the quality of LLM outputs through their embedding representations instead.
Embedding Representations: The algorithm focuses on embedding representations of LLM outputs. By comparing these embeddings, SMOOTHIE assesses the relative quality of each LLM output for a given input. The embeddings provide a compressed, semantically rich representation of the model’s predictions, which is crucial for estimating the LLM quality scores.
Quality Score Estimation: SMOOTHIE estimates a quality score for each LLM on a per-sample basis. This is done through a probabilistic model that, under a multivariate Gaussian assumption, relates the pairwise differences between LLM output embeddings on each input to each model's expected error. These scores determine which model each sample is routed to; a short sketch of this per-sample estimation appears after this list.
Routing Mechanism: Once the quality scores are computed, the routing step involves selecting the LLM with the highest score for each sample. This ensures that the most suitable LLM is used for a given task, improving accuracy and efficiency. This routing mechanism is based on the assumption that better quality embeddings are closer to the true output.
Unsupervised Learning: A key feature of SMOOTHIE is that it operates in an unsupervised manner. Unlike other routing methods that require labeled training data, SMOOTHIE relies solely on unlabeled test data to estimate the quality scores. This makes it highly scalable and adaptable to situations where labeled data is scarce or expensive to obtain.
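To make the per-sample estimation concrete, here is a minimal sketch of a local, kernel-smoothed variant: for each input, a triangulation-style error estimate is computed only over that input's nearest neighbors in embedding space. It assumes independent Gaussian errors, uses a simple uniform kernel over the k nearest neighbors by input embedding, and relies on NumPy; all of these are illustrative simplifications of the described approach, not the authors' exact estimator or code.

```python
import numpy as np
from itertools import combinations

def local_quality_scores(out_emb, in_emb, k=20):
    """Per-sample, label-free quality scores for each model.

    out_emb: (n_models, n_samples, dim) output embeddings per model.
    in_emb:  (n_samples, d) input embeddings used to find neighbors.
    k:       neighborhood size for the (uniform) kernel smoothing.
    Assumes at least 3 models and independent Gaussian errors.
    Returns scores of shape (n_samples, n_models); higher is better.
    """
    n_models, n_samples, _ = out_emb.shape
    # Per-sample squared distances between every pair of models' outputs.
    pair_d = np.zeros((n_models, n_models, n_samples))
    for i in range(n_models):
        for j in range(n_models):
            pair_d[i, j] = np.sum((out_emb[i] - out_emb[j]) ** 2, axis=-1)

    # k nearest neighbors of each input (including itself) in input space.
    dists = np.sum((in_emb[:, None, :] - in_emb[None, :, :]) ** 2, axis=-1)
    neighbors = np.argsort(dists, axis=1)[:, :k]

    scores = np.zeros((n_samples, n_models))
    for s in range(n_samples):
        nbr = neighbors[s]
        # Smooth the pairwise distances over the neighborhood, then triangulate.
        D = pair_d[:, :, nbr].mean(axis=-1)
        err = np.zeros(n_models)
        cnt = np.zeros(n_models)
        for i, j, m in combinations(range(n_models), 3):
            for a, b, c in [(i, j, m), (j, i, m), (m, i, j)]:
                err[a] += 0.5 * (D[a, b] + D[a, c] - D[b, c])
                cnt[a] += 1
        scores[s] = -err / cnt  # higher score = lower estimated error
    return scores
```

Routing each sample to the model with the highest score then yields the per-input selection described in the routing mechanism above.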
By avoiding the need for labeled data, SMOOTHIE offers a significant advantage over traditional supervised learning-based routing methods. It can be applied in a wide range of scenarios where LLM selection is crucial, such as in natural language processing tasks, without the need for extensive datasets. SMOOTHIE has demonstrated its effectiveness by outperforming other routing methods in several benchmark tasks, making it a promising approach for future AI applications.
Applications and Potential Impact
The SMOOTHIE algorithm, with its ability to route generative tasks without labeled data, offers considerable potential in a variety of AI-driven industries, opening up new avenues for innovation and optimization. Here are some key use cases where SMOOTHIE’s capabilities could prove valuable:
Healthcare and Medical Imaging: Clinical AI pipelines increasingly draw on multiple models, and SMOOTHIE's label-free routing could select the model best suited to each case without requiring labeled evaluation datasets. This could significantly reduce the time and cost associated with validating models for specific diseases, accelerating early detection and improving patient outcomes. Its ability to judge output quality from unlabeled data could likewise help prioritize promising analyses in drug-discovery workflows.
Cybersecurity: SMOOTHIE's potential in cybersecurity lies in selecting, without predefined labels, the model best suited to tasks such as summarizing logs, triaging alerts, or describing behaviors that deviate from normal network patterns. The same label-free selection could strengthen fraud-analysis pipelines by choosing the most reliable model for new types of transaction activity without manually labeled data.
Autonomous Vehicles: Autonomous-driving stacks combine many learned components, and label-free quality estimation in the spirit of SMOOTHIE could help select among candidate models or candidate outputs without hand-labeled validation data. Applied to the language-facing parts of these systems, such as interpreting instructions or summarizing observations derived from cameras, LIDAR, and radar, it could improve robustness and adaptability under varied conditions.
Retail and Customer Experience: In e-commerce, SMOOTHIE could improve recommendation and shopping-assistant experiences by routing each user query to the model that produces the most relevant response, relying on unlabeled interaction data rather than explicit labels. This would keep suggestions highly relevant while improving customer satisfaction and supporting sales.
Climate Change Monitoring: Climate research increasingly pairs generative models with satellite- and sensor-derived data products. Label-free routing could select the model best suited to summarizing or interpreting such data, for example when reporting on deforestation or shifts in emission patterns, enabling faster and more scalable monitoring without the extensive manual effort required to label evaluation data.
Natural Language Processing (NLP): For applications like sentiment analysis, language translation, and content moderation, SMOOTHIE could route each piece of unlabeled text to the model most likely to handle it well. Its label-free selection makes these systems more adaptable and quicker to deploy, since no manually labeled training sets are needed to decide which model to trust.
These are just a few of the industries that stand to benefit from SMOOTHIE’s advanced capabilities. By minimizing the need for labeled data, SMOOTHIE could unlock new efficiencies across sectors, allowing AI to address a broader range of problems with minimal human intervention.
The SMOOTHIE algorithm proposed by Stanford researchers holds significant potential for enhancing machine learning tasks in several areas, especially data efficiency and generative modeling. Its design addresses a crucial challenge: reducing the need for data labeling, which is often time-consuming and costly. By enabling label-free model selection, SMOOTHIE allows robust multi-model generative systems to be assembled without traditional manual annotation processes. This can drastically reduce the resources consumed in building such systems, making machine learning more scalable and accessible across industries, including healthcare, research, and engineering.
From a data-efficiency perspective, SMOOTHIE introduces a novel approach to getting more out of the data generative systems already have. Instead of relying on labeled data to compare and select models, the algorithm extracts quality signals from unlabeled inputs and the models' own outputs, reducing the dependency on extensive annotated datasets. This is especially critical in fields like bioinformatics and materials science, where obtaining labeled data can be extremely difficult and expensive. SMOOTHIE's ability to work effectively with unlabeled data opens up new possibilities for applying multi-model machine learning to complex tasks such as protein folding, materials discovery, and climate modeling, without the bottleneck of manually labeled examples.
Generative modeling, a field that relies heavily on creating new, synthetic data from learned patterns, also stands to benefit greatly from SMOOTHIE. Assembling a high-quality generative pipeline usually requires labeled data to evaluate and choose among candidate models. SMOOTHIE's label-free operation removes that requirement, enabling higher-quality outputs even when little or no annotated data is available. This could significantly accelerate progress in fields like drug discovery, where researchers are constantly exploring new molecular structures, or in the design of advanced materials with specific properties, where traditional data labeling is time-consuming and impractical.
In addition, this breakthrough could pave the way for future AI models that are more versatile and adaptable to real-world conditions. As machine learning models become more capable of handling unlabeled data and generating synthetic data from fewer examples, we may see faster advancements in a wide range of industries, from autonomous vehicles to personalized healthcare. Furthermore, the efficiency gains in data processing could lead to more sustainable AI practices, where less computational power and fewer resources are needed to achieve the same, or even superior, outcomes.
In summary, the SMOOTHIE algorithm has the potential to change how multi-model generative systems are built and deployed, making them more data-efficient and accessible. Its implications for the future of AI are broad: faster advances in scientific research, a stronger role for AI in real-world applications, and lower environmental and economic costs for operating large-scale machine learning systems.
Comparison with Existing Approaches
SMOOTHIE presents a significant departure from traditional machine learning algorithms that require labeled data for generative tasks, particularly in language model routing. While most existing algorithms—such as supervised models used for classification, regression, or entity recognition—rely on extensive labeled datasets to train auxiliary models that guide task-specific decisions, SMOOTHIE utilizes unsupervised learning principles. It avoids the need for human-annotated labels, instead constructing a latent variable graphical model to infer the most appropriate routing for each task using embeddings from multiple LLMs. This allows SMOOTHIE to dynamically assess the quality of different LLMs for specific tasks without the overhead of labeled training data, which is particularly advantageous for real-world applications where such datasets are costly or difficult to obtain.
In contrast, other algorithms like reinforcement learning or traditional supervised models perform well with clearly defined labels but struggle with the flexibility and adaptability that SMOOTHIE introduces. These conventional methods necessitate pre-labeled data, which can be expensive, time-consuming, and sometimes biased depending on how labels are assigned. Moreover, while reinforcement learning optimizes decision-making through reward-based learning, it also requires a well-defined reward structure, making it less suitable for tasks that benefit from label-free adaptation, as in the case of SMOOTHIE.
SMOOTHIE's label-free approach stands out in tasks involving large and diverse datasets that span multiple domains. By routing each task to the LLM that performs best for the specific input, SMOOTHIE not only reduces the dependency on labeled data but also improves task performance by better utilizing the full spectrum of LLM capabilities. This approach positions SMOOTHIE as a robust alternative, especially for generative tasks where the complexity and variability of input data make labeled training infeasible.
SMOOTHIE, the machine learning algorithm proposed by Stanford researchers, is designed to improve generative tasks through label-free routing. Its main advantage lies in removing a requirement that typically burdens generative pipelines: large amounts of labeled data for evaluating and selecting models, which are expensive and time-consuming to collect. By leveraging SMOOTHIE, multi-model systems can produce high-quality outputs even in the absence of explicit labels.
Advantages:
Data Efficiency: One of SMOOTHIE’s standout features is its ability to work with unlabeled data, making it more adaptable and efficient in scenarios where labeling is impractical or costly. This is particularly advantageous in fields like generative design, where labeled datasets might not be readily available or feasible to create.
Reduced Overfitting: Because SMOOTHIE does not train an auxiliary router on a labeled dataset, there is no learned router that can overfit to a particular annotated distribution. Quality estimates are computed directly from the data being routed, which helps the approach generalize across tasks.
Scalability: SMOOTHIE is highly scalable; its quality estimators have closed-form solutions and require no additional model training, so it can handle large datasets without a proportional increase in computational demand. This scalability is crucial in real-world applications where datasets can be massive and highly varied.
Versatility: While traditional routing methods require labeled data and task-specific tuning, SMOOTHIE can be applied across a wide range of generative tasks, and in principle to any setting where candidate outputs can be embedded and compared. This flexibility makes it an attractive option for multi-capability systems.
Limitations:
Computational Complexity: Despite its scalability, SMOOTHIE still requires obtaining outputs, and their embeddings, from every candidate model on the unlabeled data used for estimation, which can be costly for very large datasets or many models. The local variant's kernel smoothing additionally requires nearest-neighbor searches, which may introduce overhead in some computational environments.
Model Interpretability: As with many machine learning models, SMOOTHIE can be difficult to interpret, especially when dealing with large, multidimensional datasets. This lack of transparency can be a drawback in industries where understanding the rationale behind predictions is critical, such as in healthcare or finance.
Dependency on Proper Configuration: While SMOOTHIE is versatile, it still requires careful choices, such as the embedding model and the neighborhood size used for kernel smoothing in the local variant. Poor settings can lead to suboptimal routing, so users benefit from a solid understanding of both the algorithm and their data.
Data Preprocessing: Although SMOOTHIE excels with unlabeled data, some preparation is still required: outputs must be collected from the candidate models and converted into embeddings. Its performance depends heavily on the quality of those embedding representations, and a poorly chosen embedding model can undermine its effectiveness.
Future Research Directions
Further research and optimization for SMOOTHIE's machine learning capabilities could focus on several areas:
Scalability and Efficiency: While SMOOTHIE improves performance on generative tasks, optimizing it for larger datasets or more complex networks could enhance its scalability. Techniques like model pruning and quantization are being explored to address the inefficiencies of large models.
Bias Mitigation: Like many AI systems, SMOOTHIE could benefit from additional research into bias detection and mitigation. This is crucial in making the algorithm more reliable across diverse real-world applications.
Multi-modal Capabilities: Future enhancements might focus on incorporating multi-modal AI, where SMOOTHIE could be trained to handle diverse data types such as text, audio, and images, improving its generative capabilities for a wider range of tasks.
Cost and Environmental Impact: As generative models grow, optimizing SMOOTHIE to reduce training costs and environmental impacts will be a key area for further improvement.
By addressing these areas, SMOOTHIE can continue to evolve and maintain its relevance in the ever-changing landscape of machine learning and AI applications.
The SMOOTHIE algorithm, a novel approach to label-free routing for generative tasks, holds significant promise for a variety of industries, especially those requiring large-scale data analysis and real-time decision-making. One of the most intriguing potential real-world applications is in healthcare, particularly personalized medicine: by routing each request to the model best suited to it, SMOOTHIE could support more precise, individualized treatment planning from patient data, improving diagnostics and therapeutic outcomes.
In addition to healthcare, generative tasks are becoming increasingly important in fields like education, business strategy, and cybersecurity. For instance, SMOOTHIE could assist in building smarter, contextually aware educational tools that adapt in real time to student needs by always drawing on the model best suited to each learner's query. In business, it might enhance decision-making by ensuring that data-driven insights for marketing, sales strategies, or supply-chain questions come from the model best equipped to produce them.
Another promising application lies in the realm of content creation, particularly for digital media. SMOOTHIE's ability to pick the strongest of several candidate generations could be leveraged to create unique and engaging content for industries like gaming, film production, and advertising. By steering each request to the right generative model, businesses could save time and resources while improving the quality of the content produced.
As researchers continue to refine SMOOTHIE and similar algorithms, there is also room to explore its potential within cybersecurity frameworks. By selecting, without labels, the model best suited to describing or triaging anomalous behavior, this technology could improve the accuracy of threat identification and bolster overall system security.
To build upon this groundbreaking work, researchers might explore the integration of SMOOTHIE with existing generative models, such as GANs (Generative Adversarial Networks) or VAEs (Variational Autoencoders), to further increase the efficiency and scalability of the algorithm in diverse applications. Additionally, improving the algorithm’s robustness to various input types and enhancing its ability to handle large, unstructured datasets could significantly broaden its scope and impact across multiple sectors.
Conclusion
The introduction of the SMOOTHIE algorithm by Stanford researchers is a significant development in machine learning, particularly for its ability to route inputs to the best-suited language model without labeled data or auxiliary training. This has profound implications for model efficiency, reducing computational time and resource use. The key takeaway is that it offers a label-free approach, which is critical in environments where labeled data is scarce or unavailable. This could make generative tasks more efficient and accessible, driving forward machine learning capabilities across many fields.
Press contact
Timon Harz
oneboardhq@outlook.com