Deepware

Deepware is an AI safety startup that builds tools to help align machine learning models with human values. Here's a review of its main offerings:

Constitutional AI: Deepware's flagship technology uses a technique called self-supervised constitutional learning to steer AI systems away from harmful, unethical, or deceptive behavior.
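
The source does not describe how Deepware implements this, but constitutional approaches generally have a model critique and revise its own outputs against a written set of principles. The sketch below is purely illustrative: generate_fn is a stand-in for any text-generation callable, and the principles are hypothetical examples rather than Deepware's constitution.

```python
# Minimal sketch of a constitutional critique-and-revise loop.
# `generate_fn` is a placeholder for any text-generation callable
# (e.g. a call into an LLM); it is NOT a Deepware API.
from typing import Callable, List

PRINCIPLES: List[str] = [
    "Do not produce harmful or dangerous instructions.",
    "Do not deceive the user or misrepresent uncertainty.",
    "Politely decline requests for unethical behavior.",
]

def constitutional_revision(prompt: str,
                            generate_fn: Callable[[str], str],
                            n_rounds: int = 2) -> str:
    """Draft a response, then repeatedly critique and revise it
    against each principle in the constitution."""
    response = generate_fn(prompt)
    for _ in range(n_rounds):
        for principle in PRINCIPLES:
            critique = generate_fn(
                f"Principle: {principle}\n"
                f"Response: {response}\n"
                "Point out any way the response violates the principle."
            )
            response = generate_fn(
                f"Original prompt: {prompt}\n"
                f"Response: {response}\n"
                f"Critique: {critique}\n"
                "Rewrite the response so it no longer violates the principle."
            )
    return response
```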

Value specification: Their tools let researchers formally specify preferences over system behaviors and outcomes in a machine-readable format, so models can be trained and evaluated against the intended behavior.
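
The review does not show Deepware's actual specification format. As an illustration only, a machine-readable value specification could be expressed as plain data structures; the field names, metrics, and thresholds below are hypothetical.

```python
# Hypothetical value specification expressed as plain data.
# Field names and thresholds are illustrative, not Deepware's format.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Constraint:
    metric: str        # name of a measurable property of model behavior
    operator: str      # e.g. "<=" or ">="
    threshold: float   # acceptable bound for that metric

@dataclass
class ValueSpec:
    name: str
    description: str
    hard_constraints: List[Constraint] = field(default_factory=list)
    objective_weights: Dict[str, float] = field(default_factory=dict)

spec = ValueSpec(
    name="customer-support-assistant",
    description="Helpful, honest, and fair responses to support queries.",
    hard_constraints=[
        Constraint(metric="toxicity_rate", operator="<=", threshold=0.01),
        Constraint(metric="demographic_parity_gap", operator="<=", threshold=0.05),
    ],
    objective_weights={"helpfulness": 0.7, "honesty": 0.3},
)
```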

Monitoring: Deepware's platform continuously monitors deployed AI systems to detect drift from the specified values as they are used and trained further.
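
How Deepware detects such drift is not specified. One simple, generic approach is to compare a behavioral metric measured on recent traffic against a reference window recorded at deployment; the metric name, scores, and threshold below are made up for illustration.

```python
# Hypothetical drift check: flag when a value-alignment metric on
# recent traffic moves far from its reference distribution.
from statistics import mean, pstdev
from typing import Sequence

def value_drift_detected(reference: Sequence[float],
                         recent: Sequence[float],
                         z_threshold: float = 3.0) -> bool:
    """Return True when the recent mean of the metric (e.g. a
    per-response harmlessness score) is more than `z_threshold`
    reference standard deviations away from the baseline mean."""
    baseline_mean = mean(reference)
    baseline_std = pstdev(reference) or 1e-9  # avoid division by zero
    shift = abs(mean(recent) - baseline_mean) / baseline_std
    return shift > z_threshold

# Example: harmlessness scores collected before and after a fine-tune.
baseline_scores = [0.97, 0.98, 0.96, 0.97, 0.99, 0.98]
recent_scores = [0.91, 0.90, 0.89, 0.92, 0.90, 0.88]
print(value_drift_detected(baseline_scores, recent_scores))  # True here
```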

Alignment incentives: The platform incorporates the specified values into the training process through reward shaping, so aligned behavior is rewarded directly during training rather than relying solely on external interventions at deployment time.
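
The exact shaping scheme is not described in the source. A minimal sketch, assuming penalty terms derived from the value specification, might blend the task reward with weighted value-violation scores; the names and weights here are illustrative.

```python
# Hypothetical reward shaping: blend task reward with penalties tied
# to the specified values. Names and weights are illustrative.
from typing import Dict

def shaped_reward(task_reward: float,
                  value_scores: Dict[str, float],
                  penalty_weights: Dict[str, float]) -> float:
    """Total reward = task reward minus weighted penalties, where each
    value violation score lies in [0, 1] (0 = no violation)."""
    penalty = sum(penalty_weights.get(name, 0.0) * score
                  for name, score in value_scores.items())
    return task_reward - penalty

# Example: a response that is useful but mildly deceptive.
r = shaped_reward(
    task_reward=1.0,
    value_scores={"harmfulness": 0.0, "deception": 0.4},
    penalty_weights={"harmfulness": 2.0, "deception": 1.5},
)
print(r)  # 1.0 - (0.0*2.0 + 0.4*1.5) ≈ 0.4
```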

Multi-objective optimization: It can optimize models simultaneously for multiple objectives such as accuracy, fairness, and transparency, rather than for raw performance alone.
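
The source does not say how the objectives are combined. One common technique, shown here only as a sketch, is weighted-sum scalarization of several loss terms into a single training loss; the objective names and weights are hypothetical.

```python
# Hypothetical weighted-sum scalarization of several objectives.
from typing import Dict

def combined_loss(losses: Dict[str, float],
                  weights: Dict[str, float]) -> float:
    """Scalarize multiple objectives into one training loss.
    A larger weight means that objective matters more."""
    return sum(weights[name] * value for name, value in losses.items())

step_losses = {
    "task_error": 0.32,     # e.g. cross-entropy on the main task
    "fairness_gap": 0.08,   # e.g. disparity between subgroups
    "opacity": 0.15,        # e.g. penalty for hard-to-explain behavior
}
weights = {"task_error": 1.0, "fairness_gap": 0.5, "opacity": 0.25}
print(combined_loss(step_losses, weights))  # 0.32 + 0.04 + 0.0375 ≈ 0.3975
```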

Explainability: The models are also designed to clearly explain their decisions and behavior, building user trust through accountability.
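
Deepware's explanation mechanism is not detailed in the source. As a generic illustration, one simple form of explanation is reporting each feature's signed contribution to a linear decision score; the feature names and weights below are made up.

```python
# Hypothetical explanation for a linear scoring model: report each
# feature's signed contribution to the final score.
from typing import Dict, List, Tuple

def explain_decision(features: Dict[str, float],
                     weights: Dict[str, float],
                     bias: float = 0.0) -> Tuple[float, List[Tuple[str, float]]]:
    """Return the score and (feature, contribution) pairs sorted by
    absolute influence on the decision."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = explain_decision(
    features={"account_age_years": 4.0, "late_payments": 2.0},
    weights={"account_age_years": 0.3, "late_payments": -0.8},
)
print(score)  # 1.2 - 1.6 ≈ -0.4
for name, contribution in ranked:
    print(f"{name}: {contribution:+.2f}")
```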

Open source: Deepware shares their research through peer-reviewed papers and provides open technical documentation for their techniques.

By placing "guardrails" around machine learning that respect human values and priorities, Deepware is working to ensure that AI progress is robust and beneficial, and to avoid the potential harms of misaligned systems.