Transform Your Mental Wellness with AI
Your personal AI-powered journal that helps you track, understand, and improve your mental wellbeing through personalized insights and guidance.
- npm

  ```sh
  npm install npm@latest -g
  ```

- Docker: follow the official instructions.
- ngrok

  ```sh
  brew install ngrok
  ```

  Or follow the official instructions for installation and signup.
- Clerk: we use Clerk for user authentication and session management. Create a Clerk account on the official website.
  - Note: a test Clerk account is set up for the course instructors. Visit Confluence or contact us for details.
- Set up a static domain for ngrok by following the official instructions. Then, run

  ```sh
  ngrok http 8085 --url=<YOUR_STATIC_DOMAIN>
  ```

  This exposes port 8085 (and thus the API gateway) to the internet.
- Create a `.env` file in the root directory by copying the `.env.example` files.
- Set up the Clerk webhook

  In order to sync users from Clerk to the local user DB, Clerk needs a way to communicate with the application. This is done through webhooks: every time a new user registers using Clerk, Clerk sends a request to our application containing the user's detailed information.

  Sign in to your Clerk dashboard. Go to Configure -> Webhooks (under Developers). Click on "Add Endpoint". Under endpoint URL, enter `https://<YOUR_STATIC_DOMAIN>/api/webhooks`. Below, subscribe to all `user` events: `user.created`, `user.deleted`, and `user.updated`. Finally, click "Create".

  Click into the webhook endpoint you just created. On the right side of the page, there should be a field "Signing Secret" with a value that starts with `whsec_...`. Copy that value and paste it into `.env`'s `CLERK_WEBHOOK_SECRET` variable.
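Clerk delivers webhooks through Svix, which signs every request with that secret. If you ever need to sanity-check a delivery by hand, the scheme is an HMAC-SHA256 over `id.timestamp.body`. The sketch below is illustrative only and not part of this codebase; in production you would rely on the official Svix SDK rather than rolling your own:

```python
import base64
import hashlib
import hmac

def verify_webhook(secret: str, msg_id: str, timestamp: str, body: str, signature: str) -> bool:
    """Verify a Svix-style webhook signature (illustrative sketch).

    The part of the secret after the 'whsec_' prefix is a base64-encoded key;
    the signature is an HMAC-SHA256 over '<msg_id>.<timestamp>.<body>'.
    """
    key = base64.b64decode(secret.removeprefix("whsec_"))
    signed_content = f"{msg_id}.{timestamp}.{body}".encode()
    expected = base64.b64encode(hmac.new(key, signed_content, hashlib.sha256).digest()).decode()
    # Constant-time comparison guards against timing attacks
    return hmac.compare_digest(expected, signature)
```

In practice the signature arrives in the `svix-signature` header in the form `v1,<base64-signature>`, so the `v1,` prefix has to be split off before comparing.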
- Set up the Clerk key pair

  Under Configure -> API keys (under Developers), select "React" in the top-right dropdown menu of "Quick Copy". Then, copy the `VITE_CLERK_PUBLISHABLE_KEY` value to `.env`.

  Finally, on the same page, under "Secret keys", add a new key and copy the value of the secret key into `.env`'s `CLERK_SECRET_KEY`.
- Build and run using Docker

  In the root folder, run

  ```sh
  docker compose up --build
  ```

  Access the application through http://localhost:3000.
- Sign in / register using the button in the top-right corner.
- Navigate to the journal page by clicking either "My Journal" next to your user avatar, or "Start Your Journalling Now" in the middle of the homepage.
- Click the "Add Snippet" or "Quick Entry" button to start writing snippets. Select your current mood, write a few short sentences about what's on your mind, and save. You can also optionally add tags to your snippets for searching later.
- After you have written at least three snippets, you will be prompted to "Create Journal". In the journal editor, you can:
- Click on "Today's Journal" in the top bar to change the title of the journal.
- Click on "Edit" or directly click the journal content to write your daily journal.
- Need inspiration? Click "Regenerate Journal" to let the Gen-AI summarize your snippets. You can use the result as a starting point.
- Once you've written something, you can click on "Generate Insights" to let the Gen-AI analyze your journal entry - mood pattern, suggestions, tips and so on. View the comprehensive analysis of your day by clicking "Insights" on the top-right.
- You can search for, filter, and view old journals in the tab "Previous Journals".
- In the "Overview" tab, you can see statistics of your journaling habit as well as your well-being trends.
ZenAI utilizes generative AI to provide value to the user through carefully crafted prompts. The two concrete use cases are:
- Summarization/Generation of journal entry content. This is intended to give users inspiration and serve as a basis/draft journal entry that the users can modify and improve upon with their own words. It also lowers the mental hurdle of starting to write a journal.
- Analysis of well-being insights. This aims to provide users with material generated from their journal entries, shedding light on their mood patterns and offering helpful tips. It allows for a deeper introspective perspective on oneself.
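As a sketch of how the summarization case can be wired up (the function name, model id, and prompt wording here are illustrative assumptions, not the service's actual implementation), it boils down to assembling a chat-completions payload from the day's snippets:

```python
def build_summary_request(snippets: list[str], model: str = "some-model") -> dict:
    """Assemble a chat-completions payload that asks the model to draft a
    journal entry from the user's snippets (illustrative, not the real prompt)."""
    prompt = (
        "Summarize the following journal snippets into a short, first-person "
        "draft journal entry that the user can edit:\n\n"
        + "\n".join(f"- {s}" for s in snippets)
    )
    return {
        "model": model,
        "messages": [
            # System message fixes the assistant's persona for every request
            {"role": "system", "content": "You are a supportive journaling assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_summary_request(["Slept well for once.", "Stressful standup meeting."])
```

The payload would then be POSTed to the configured GenAI endpoint; the insights use case works the same way with an analysis-oriented prompt instead.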
ZenAI uses GitHub Actions for continuous integration and deployment, automating its testing, building, and deployment workflows.
The main CI workflow validates code quality (linting) and runs tests across all services with path-based triggering:
- Client Testing (`ci.yml`)
  - Triggered only when client code changes
  - Node.js 22 setup and dependency installation
  - ESLint code linting
  - Build verification with Vite
- Server Testing (`ci.yml`)
  - Triggered only when server code changes
  - Java 21 setup with the Gradle build system
  - Matrix strategy testing all microservices (API Gateway, Journal, User)
  - Unit test execution via `./gradlew test`
  - Build verification
- GenAI Service Testing (`ci.yml`)
  - Triggered only when GenAI service code changes
  - Python 3.11 environment setup
  - Dependency installation and Ruff linting
  - FastAPI server health checks
  - Background service testing
- Helm Chart Validation (`ci-kubernetes.yaml`)
  - Triggered on changes to Helm charts
  - Kubernetes and Helm setup
  - Chart linting and template rendering tests
  - Validates Kubernetes deployment configurations
- Docker Image Build
  - Triggers directly on main branch pushes (no need to wait for CI since tests already passed on the PR)
  - Multi-architecture builds (linux/amd64, linux/arm64)
  - Pushes images to the GitHub Container Registry (ghcr.io)
  - Services built: client, api-gateway, journal-microservice, user-microservice, genai
- Kubernetes Deployment
  - Triggered after successful Docker image builds
  - Deploys to the AET Kubernetes cluster using Helm
  - Updates all services with the latest images
  - Namespace: `zenai-team`
- AWS EC2 Deployment
  - Manual EC2 deployment using GitHub Actions and Ansible
  - Triggered manually via workflow_dispatch or on pushes to the main/feat/aws-deployment branches
  - SSH-based deployment to pre-provisioned EC2 instances
  - Docker Compose orchestration with a monitoring stack
- Infrastructure Provisioning
  - Terraform-based EC2 instance provisioning on AWS
  - Ansible configuration management
  - Triggered on infrastructure code changes
- Pull Requests: run CI tests for changed components only
- Main Branch Push: full CI/CD pipeline with automatic deployment
- Manual Dispatch: AWS EC2 deployment and Terraform operations
- Path-based Triggers: intelligent triggering, so only affected services are tested and rebuilt
  - Client changes → Client CI only
  - Server changes → Server CI only
  - GenAI changes → GenAI CI only
  - Helm changes → Kubernetes validation only
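As an illustration of how such path-based triggering looks in a GitHub Actions workflow (the `client/**` path here is a hypothetical example, not necessarily this repository's actual layout):

```yaml
# Only run this workflow when files under client/ change
on:
  pull_request:
    paths:
      - "client/**"
  push:
    branches: [main]
    paths:
      - "client/**"
```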
ZenAI includes comprehensive monitoring with Prometheus and Grafana to track request counts, request latency, and error rates.
When running the application with Docker Compose, monitoring services are automatically started:
- Prometheus: available at http://localhost:9090
  - Collects metrics from all microservices (the targets can be seen under Status -> Targets)
  - Provides a web interface to view metrics and create queries (under Graph, queries like `sum(http_server_requests_seconds_count{job="api-gateway"})` can be executed)
  - Stores time-series data for historical analysis
- Grafana: available at http://localhost:3001
  - Credentials: username is `admin`; for the password, contact us via Artemis
  - Pre-configured dashboards (request count, request rate, request latency, max request latency, and error rate) for all microservices:
    - API Gateway metrics
    - Journal Microservice metrics
    - User Microservice metrics
    - GenAI Service metrics
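Besides the web UI, Prometheus serves the same queries over its HTTP API (`/api/v1/query`). The snippet below parses a canned response in the documented instant-vector shape; the timestamp and value are made up for illustration:

```python
import json

# Trimmed example of a /api/v1/query response for
# sum(http_server_requests_seconds_count{job="api-gateway"}) -- values invented.
sample = json.loads("""
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {"metric": {}, "value": [1712000000, "1234"]}
    ]
  }
}
""")

def first_value(response: dict) -> float:
    """Extract the scalar value from an instant-vector query result."""
    result = response["data"]["result"]
    # Prometheus encodes sample values as strings in [timestamp, value] pairs
    return float(result[0]["value"][1]) if result else 0.0

print(first_value(sample))  # -> 1234.0
```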
- Access the Grafana dashboard: Grafana can be accessed in our Rancher project at https://zenai-team.student.k8s.aet.cit.tum.de/grafana. Credentials: username is `admin`; for the password, contact us via Artemis.
- Access Prometheus:

  ```sh
  kubectl port-forward -n zenai-team deploy/prometheus 9090:9090
  ```

  If the port is already allocated, try another one:

  ```sh
  kubectl port-forward -n zenai-team deploy/prometheus 9091:9090
  ```

  Then visit http://localhost:9090 in your browser. The available targets can be found under Status -> Targets. Under Graph, queries like `sum(http_server_requests_seconds_count{job="api-gateway"})` can be executed.
- View application logs:

  ```sh
  # View logs from a specific service
  kubectl logs -l app=zenai-api-gateway-selector -n zenai-team
  kubectl logs -l app=zenai-journal-selector -n zenai-team
  kubectl logs -l app=zenai-user-selector -n zenai-team
  kubectl logs -l app=zenai-genai-selector -n zenai-team
  ```
Grafana comes pre-configured with custom dashboards located in:
- `grafana/provisioning/dashboards/` (Docker setup)
- `helm/files/grafana/dashboards/` (Kubernetes setup)
You can import additional dashboards or modify existing ones through the Grafana web interface (Dashboards -> New Dashboard).
Prometheus is configured with alert rules for:
- Service unavailability. Alert rules can be seen in the Prometheus web interface under Status -> Rules or under Alerts (firing means the service has been down for at least 1 minute).
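A minimal Prometheus alert rule matching that behaviour might look like the following sketch (the group and rule names are illustrative; the actual rules live in the repository's Prometheus configuration):

```yaml
groups:
  - name: availability
    rules:
      - alert: ServiceDown        # hypothetical rule name
        expr: up == 0             # target failed its most recent scrape
        for: 1m                   # must stay down for 1 minute before firing
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.job }} has been unreachable for over a minute"
```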
Automated AWS EC2 deployment using GitHub Actions and Ansible is also supported. You can provision the infrastructure using either Terraform (automated) or a manual setup.
- AWS Account with appropriate permissions
- GitHub repository with admin access to configure secrets
- Terraform installed locally (for infrastructure provisioning)
- AWS CLI configured with your credentials
  - Run `aws configure` to set up your AWS Access Key ID, Secret Access Key, and default region
  - For temporary credentials (e.g. from AWS Academy or SSO), you may also need to set a session token:

    ```sh
    aws configure set aws_session_token YOUR_SESSION_TOKEN
    ```

  - Verify your configuration with:

    ```sh
    aws sts get-caller-identity
    ```
Note: This step is only needed if the S3 bucket for the Terraform state doesn't exist yet.
- Check whether the bucket exists:

  ```sh
  aws s3 ls s3://zenai-terraform-state-bucket
  ```

- If the bucket doesn't exist, run the setup script:

  ```sh
  cd infra/scripts
  ./setup-terraform-backend.sh
  ```

- If the bucket already exists, skip this step and proceed to the GitHub secrets configuration.
Add the SSH private key to your GitHub repository for deployment access:
- Go to your GitHub repository → Settings → Secrets and variables → Actions
- Click New repository secret
- Add the following secret:
  - Name: `EC2_SSH_PRIVATE_KEY`
  - Value: the complete contents of your EC2 private key (`.pem` file)

To get your private key contents:

```sh
cat ~/.ssh/your-ec2-key.pem
```
Prerequisites: ensure the AWS CLI is configured (see the Prerequisites section above).
Run Terraform locally:

```sh
cd infra/
terraform init
terraform plan
terraform apply
```
Note: Terraform will use your AWS CLI credentials. If you're using temporary credentials (session tokens), make sure they're still valid before running Terraform commands.
The Terraform configuration will create:
- VPC with public subnet
- Security groups (SSH, HTTP, application ports)
- EC2 instance (t3.large by default, with 30 GB encrypted storage)
- Elastic IP address
You can customize the infrastructure by creating your own `terraform.tfvars` file or modifying the defaults in `infra/variables.tf`:
Default values (defined in `variables.tf`):

```hcl
region        = "us-east-1"
ami_id        = "ami-084568db4383264d4" # Ubuntu 20.04 LTS in us-east-1
instance_type = "t3.large"
key_name      = "vockey"                # Default AWS Academy key pair name
```

To customize, create `infra/terraform.tfvars`:

```hcl
region        = "us-west-2"            # Change region if needed
ami_id        = "ami-your-custom-ami"  # Use a different AMI if needed
instance_type = "t3.medium"            # Smaller instance type
key_name      = "your-key-pair-name"   # Your actual key pair name
```
If you prefer manual setup, launch an EC2 instance with the following configuration:
- AMI: Ubuntu Server 20.04 LTS or later
- Instance Type: t3.medium or larger (recommended for Docker workloads)
- Security Group: Allow inbound traffic on:
- Port 22 (SSH)
- Port 3000 (Client)
- Port 8085 (API Gateway)
- Port 3001 (Grafana)
- Port 9090 (Prometheus)
- Key Pair: Create or use existing key pair for SSH access
Regardless of which infrastructure option you choose, configure these GitHub secrets for application deployment:
| Secret Name | Description | Example |
|---|---|---|
| `EC2_SSH_PRIVATE_KEY` | Contents of your EC2 private key (`.pem` file) | `-----BEGIN RSA PRIVATE KEY-----\n...` |
| `GENAI_API_KEY` | API key for the GenAI service | `your-genai-api-key` |
| `GENAI_API_URL` | URL for the GenAI service endpoint | `https://gpu.aet.cit.tum.de/api/chat/completions` |
| `VITE_CLERK_PUBLISHABLE_KEY` | Clerk publishable key for the client | `pk_test_...` |
| `CLERK_SECRET_KEY` | Clerk secret key for backend authentication | `sk_test_...` |
| `CLERK_WEBHOOK_SECRET` | Clerk webhook secret for user sync | `whsec_...` |
| `CLERK_AUTHORIZED_PARTY` | Clerk authorized party URL | `http://YOUR_EC2_IP:3000` |
| `MONGO_DB_URI_USER` | MongoDB connection URI for the user database | `mongodb://user-db:27017/userdb` |
| `MONGO_DB_URI_JOURNAL` | MongoDB connection URI for the journal database | `mongodb://journal-db:27017/journaldb` |
| `GF_SECURITY_ADMIN_PASSWORD` | Password for the Grafana admin user | `secure-password` |

In addition, configure this repository variable:

| Variable Name | Description | Example |
|---|---|---|
| `EC2_PUBLIC_IP` | Public IP address of your EC2 instance | `54.123.45.67` |
- Go to your GitHub repository
- Click Settings → Secrets and variables → Actions
- Click New repository secret
- Add each secret with the exact name and value

For `EC2_SSH_PRIVATE_KEY`:

```sh
# Copy the entire contents of your .pem file
cat ~/.ssh/your-ec2-key.pem
```

Copy the output (including `-----BEGIN RSA PRIVATE KEY-----` and `-----END RSA PRIVATE KEY-----`) and paste it as the secret value.

- In the same Actions secrets page, click the Variables tab
- Click New repository variable
- Add `EC2_PUBLIC_IP` with your instance's public IP (for manual setup) or use the Terraform output
- Local Commands: run `terraform init`, `terraform plan`, and `terraform apply` locally
- State Management: state is stored in an S3 bucket for persistence
- Outputs: after `terraform apply`, you'll get the EC2 instance's public IP; add this to your GitHub variables
- Launch an EC2 instance manually through the AWS Console
- Note down the public IP and add it to GitHub variables as `EC2_PUBLIC_IP`
- Go to Actions tab in your GitHub repository
- Click Deploy to EC2 workflow
- Click Run workflow
The deployment happens automatically when:
- You push to the main branch (Docker images get built first, then deployment follows)
- Or you can trigger it manually if needed
The `deploy_aws.yml` workflow performs these steps:
- Validation: Verify all required secrets and variables are configured
  - Checks for missing `EC2_SSH_PRIVATE_KEY`, `GENAI_API_KEY`, Clerk keys, and `GF_SECURITY_ADMIN_PASSWORD`
  - Validates that the `EC2_PUBLIC_IP` variable is set
  - Fails early with clear error messages if any configuration is missing
- Setup: Checkout code and configure SSH keys
- Test Connection: Verify SSH access to EC2 instance
- Install Ansible: Set up Ansible on the GitHub runner
- Deploy: Run Ansible playbook to deploy the application
- Verify: Test if services are responding
- Summary: Provide deployment status and access URLs
- Session Dependency: Since we're using AWS Learner Lab, the lab session must remain active for the deployed application to work
- Service Downtime: When the AWS Learner Lab session expires (every 4 hours) or is closed, the deployed application will become inaccessible
- Manual Recovery: if the services go down after a lab session restart, you'll need to manually restart them:

  ```sh
  ssh -i ~/.ssh/your-key.pem ubuntu@YOUR_EC2_IP
  cd /home/ubuntu/app
  docker compose up -d
  ```
- Terraform Updates: If you make changes to the Terraform infrastructure code, you'll need to manually run the deployment workflow again since there's no automated infrastructure CI/CD pipeline
- Manual Coordination: Infrastructure changes and application deployment need to be coordinated manually
- Keep Lab Active: Make sure to keep your AWS Learner Lab session active while demonstrating or using the deployed application
- Monitor Services: Check service status regularly if you notice the application is unresponsive
- Document IP Changes: if you recreate or change the infrastructure, you might need to update the `EC2_PUBLIC_IP` variable in GitHub.
After successful deployment, your application will be available at:
- Client: http://YOUR_EC2_IP:3000
- API Gateway: http://YOUR_EC2_IP:8085
- Grafana: http://YOUR_EC2_IP:3001 (admin / YOUR_GF_SECURITY_ADMIN_PASSWORD)
- Prometheus: http://YOUR_EC2_IP:9090
Note: ZenAI is currently also deployed on AWS and accessible at: http://54.158.147.171 (most likely not active if AWS Learner Lab session is not running)
If the deployment fails during the validation step:
- Check GitHub Secrets: Go to Repository Settings → Secrets and variables → Actions → Secrets
- Verify Required Secrets: ensure all required secrets are configured with their exact names: `EC2_SSH_PRIVATE_KEY`, `GENAI_API_KEY`, `VITE_CLERK_PUBLISHABLE_KEY`, `CLERK_SECRET_KEY`, `CLERK_WEBHOOK_SECRET`, `GF_SECURITY_ADMIN_PASSWORD`
- Check Variables: go to the Variables tab and verify that `EC2_PUBLIC_IP` is set
- Review Error Messages: the workflow provides specific guidance on missing configuration items
- Verify EC2 security group allows SSH (port 22) from GitHub Actions IPs
- Check that the private key is correctly formatted in the secret
- Ensure the EC2 instance is running and accessible
- SSH into the instance and check the Docker logs:

  ```sh
  ssh -i ~/.ssh/your-key.pem ubuntu@YOUR_EC2_IP
  cd /home/ubuntu/app
  docker compose logs
  ```
- Verify all required secrets are configured in GitHub
- Check that variable names match exactly (case-sensitive)
- The deployment workflow now validates configuration before attempting deployment
ZenAI's APIs for every microservice (user, journal, genai, and api-gateway) are documented using Swagger, and are accessible by visiting
- Local: http://localhost:8085/api/swagger-ui.html
- Deployed: https://zenai-team.student.k8s.aet.cit.tum.de/api/swagger-ui.html

in your browser; select the microservice whose API documentation you would like to see from the dropdown list in the top-right corner.
| Contributor | Responsibilities |
|---|---|
| Natalia Milanova | Backend CRUD operations, AI summarization functionality, Kubernetes deployment, Monitoring |
| Evan Christopher | Client implementation + testing, CI pipelines, Infra provisioning with Terraform, AWS deployment, Overall code refactoring across features |
| Zexin Gong | API gateway and authentication, Backend service tests, Overall testing & bug fixes, Documentation |