We believe that the true power of AI lies in its ability to make a meaningful, positive impact on people's lives.
We develop AI tools that help businesses create stronger, healthier, and more responsible connections with customers.
Our tools are designed to improve decision-making, enhance customer experiences, and foster positive relationships that benefit both organizations and their customers.
The Ethical Advisory Board meets regularly and on an ad-hoc basis. It plays a crucial role in inspiring, informing, challenging, and recommending ethical guidelines for our AI projects.
Thomas Z. Ramsøy is a renowned neuroscientist and a leading figure in applied neuroscience. He formerly led the Center for Decision Neuroscience at CBS and Copenhagen University Hospital.
Helle Thorning-Schmidt is the 26th Prime Minister of Denmark, former CEO of Save the Children International, and a board member at Meta, Vestas, Edelman, and more. She is also known for her global advocacy work.
Since 1997, Jan Trzaskowski has dealt with legal, regulatory and ethical aspects of technology and digital marketing. He has authored many books and articles on these subjects.
Kris Østergaard is the bestselling author of Transforming Legacy Organizations, editor of the anthology Ethics at Work and Head of Research & Publishing at Rehumanize Institute.
Dr. Imran Rashid, a prominent medical doctor and author, is acclaimed for his impactful books on digital health. As Health Director at Lenus.io, he pioneers innovative digital health solutions.
We recognize the impact of our AI technologies and make our decisions based on their implications for human well-being.
Our approach includes:
Ethical Advisory Board: Our Ethical Advisory Board (EAB) consists of diverse experts who regularly provide guidance on ethical guidelines in our AI projects.
Ethical Partnerships: Our partnerships align with Neurons' values; we prioritize collaborations with companies that pursue positive societal change.
Responsible Innovation: We consider the broader implications of our innovations, aiming to avoid solutions that might lead to ethical dilemmas or societal harm.
We safeguard participant privacy, maintain AI model integrity, and support the development of unbiased, representative solutions.
Our practices include:
Protection of Individual Privacy: We implement stringent security measures to safeguard data, making sure all models are developed using anonymized, non-identifiable information.
Fairness and Bias Prevention: We are dedicated to minimizing bias and discrimination in our AI models by collecting diverse datasets and using rigorous testing methodologies.
Global and Inclusive Data Collection: We source data from a globally diverse population to develop inclusive AI models that accurately reflect a wide array of consumer experiences.
How we collect and store data
We have thorough data anonymization processes in place, so that individual responses are never identifiable. We strictly comply with international regulations like GDPR and the Declaration of Helsinki, underscoring our commitment to data protection and privacy. We are also SOC 2 Type II certified.
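A common building block for this kind of anonymization is replacing direct identifiers with salted one-way hashes before storage, so responses from the same participant stay linkable for analysis without being traceable to a person. The sketch below is a generic, hedged illustration only; the function and field names are hypothetical, and Neurons' actual pipeline is not described here.

```python
import hashlib
import secrets

# Hypothetical sketch: a per-dataset secret salt, stored separately
# from the data so tokens cannot be recomputed from the records alone.
SALT = secrets.token_bytes(16)

def pseudonymize(participant_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash.

    The same participant always maps to the same token within a
    dataset, preserving linkability for analysis while removing
    the identifiable value itself.
    """
    digest = hashlib.sha256(SALT + participant_id.encode("utf-8")).hexdigest()
    return digest[:16]

# Example record before and after de-identification
record = {"participant_id": "jane.doe@example.com", "response": 4}
record["participant_id"] = pseudonymize(record["participant_id"])
```

Strictly speaking, salted hashing is pseudonymization under GDPR terminology; full anonymization additionally requires measures such as aggregation or removal of indirect identifiers.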
We make our AI models reliable, base them on peer-reviewed methods, and maintain high standards of transparency and accountability.
Our methods include:
Scientific Foundation and Validation: Our data collection methods are grounded in peer-reviewed scientific research and established scientific standards.
Sensitivity and Specificity: Our metrics are carefully selected for their sensitivity, specificity, reliability, and relevance to industry needs, making our models accurate and applicable.
Continuous Improvement: We continuously update our datasets to capture the latest trends and consumer sentiments, making sure our AI models remain current and accurate.
You can learn more about our scientific methods here.
We train our AI models with a strong focus on fairness and security.
Neurons’ AI training process is transparent, allowing stakeholders to understand model development and gain insights into how our AI reaches its conclusions. This explainability is foundational for building trust and reliability in our models.
We use diverse data-preprocessing methods, such as regularization, to minimize overfitting and prevent bias. We also continuously monitor our models and conduct extensive testing to keep them reliable across applications.
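To illustrate the general idea of regularization (a generic sketch only, not Neurons' actual training code), an L2 penalty added to a least-squares loss shrinks model weights, which reduces variance on noisy or collinear data and so helps prevent overfitting:

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Least-squares fit with an L2 (ridge) penalty.

    The penalty alpha * ||w||^2 pulls weights toward zero;
    alpha=0 recovers ordinary least squares.
    Closed-form solution: w = (X^T X + alpha * I)^-1 X^T y
    """
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Tiny demo: a near-duplicate feature makes plain least squares
# assign large, unstable weights; the penalty keeps them small.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=20)  # near-collinear column
y = X[:, 0] + 0.1 * rng.normal(size=20)

w_plain = ridge_fit(X, y, alpha=0.0)  # ordinary least squares
w_ridge = ridge_fit(X, y, alpha=1.0)  # regularized
print(np.linalg.norm(w_plain), np.linalg.norm(w_ridge))
```

The regularized weight vector has a smaller norm than the unregularized one, which is exactly the shrinkage effect that curbs overfitting.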