This is a repository for an e-commerce Android app template.
Work in progress 🚧
This application's code base is structured according to the official Android architecture guidance. This means following an MVVM approach to layers, modularising the different layers to keep the code as independent as possible, and adhering to the Clean Architecture design philosophy. For the network layer, a GraphQL service is responsible for handling all our requests, and we use Apollo Kotlin as the client on our end. On the UI side, we use Compose for every screen and feature.
The application is composed of the following main modules:
- App: main application entry-point
- Network: core request logic and generic response handling
- Data: responsible for integrating FPS services
- Domain: contains the business logic and has no dependencies from other modules
- DesignSystem: style system and UI components
- Feature: UI features and screens
- Repositories: the interface should have a simple name (`BagRepository`) and the class implementing it should have the `Impl` suffix (`BagRepositoryImpl`)
- Data Models:
  - API response models (DTOs) are generated by Apollo Kotlin from the declared `*.graphql` files, contained within the `data` layer - in this example, the class would be generated as `GetBagQuery.Data`
  - The main model should have a simple name (`Bag`) and be defined in the Domain Layer, exposed from the Data Layer to the Presentation Layer, to avoid having dependencies on the Domain Layer
  - The UI model should be named with the `UI` suffix (`BagUI`) and be used only in the Presentation Layer
  - When necessary, Database models should be named with the `Entity` suffix (`BagEntity`)
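A minimal pure-Kotlin sketch of these naming conventions, continuing the Bag example (the fields and the stubbed implementation are illustrative, not the real app code):

```kotlin
// Domain Layer: the main model has a simple name.
data class Bag(val id: String, val itemCount: Int)

// Data Layer: interface with a simple name, implementation with the Impl suffix.
interface BagRepository {
    fun getBag(): Bag
}

class BagRepositoryImpl : BagRepository {
    // In the real app this would map the Apollo-generated GetBagQuery.Data;
    // a stub keeps the sketch self-contained.
    override fun getBag(): Bag = Bag(id = "bag-1", itemCount = 2)
}

// Presentation Layer: UI model with the UI suffix.
data class BagUI(val itemCountLabel: String)
```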
GraphQL queries are stored in the data layer, within the `graphql` directory. Each query should live in its own subdirectory, to keep everything as organized as possible. The following is an example of how it should look:
- graphql
- account
- GetUser.graphql
- UpdateUser.graphql
- bag
- GetBag.graphql
- AddItem.graphql
- product
- GetProduct.graphql
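For illustration, a hypothetical `bag/GetBag.graphql` file could look like the following (the fields are assumptions, not the actual schema):

```graphql
query GetBag {
  bag {
    id
    items {
      productId
      quantity
    }
  }
}
```

Apollo Kotlin would generate the corresponding `GetBagQuery.Data` response model from this file.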
Jetpack Compose is the default for every UI feature and screen.
Each major app feature should be implemented as a module, so that its screens and ViewModels are completely inaccessible from other feature modules, thus keeping everything properly compartmentalized. An example structure of modules could look like the following:
- feature
- account
- bag
- checkout
- home
- pdp
- wishlist
Theme default values and stylistic options are to be defined in the designsystem module, including colors, dimensions, shapes, fonts, etc.
Reusable components such as buttons, badges and loaders, among others, should be implemented in this module, so that they can easily be used in multiple screens whenever necessary.
Within each feature module, we add everything necessary to connect to the domain layer and build the user interface and experience. This usually consists of creating several files that follow common software design patterns:
- UI models: represent the data to be shown on screen
- Factories: the preferred way to create UI models, generally using data coming from the `domain` layer - avoid simple mappers, as those become harder to expand upon
- ViewModels: where the state and logic for each screen are handled
- UI screens: where Composables are defined - these should only contain the logic strictly necessary to render the state, as defined by the ViewModel
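A minimal pure-Kotlin sketch of the factory pattern described above (all names and fields are illustrative):

```kotlin
// Domain model, as exposed by the domain layer.
data class Bag(val items: List<String>)

// UI model: only the data the screen needs to render.
data class BagUI(val title: String, val itemCountLabel: String)

// Factory: builds the UI model from domain data. Unlike a one-to-one mapper,
// a factory can later take additional inputs (string resources, feature
// flags, formatting rules) without reshaping its call sites.
class BagUIFactory {
    fun create(bag: Bag): BagUI =
        BagUI(
            title = "Your bag",
            itemCountLabel = "${bag.items.size} items",
        )
}
```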
Compose Destinations is the library used to support navigation in the application.
As of now, Destinations code generation is enabled only for the `:feature:*` modules (through the `alfie.feature` plugin), and screens should be implemented in those modules. If a screen needs to be implemented in a different module, extra setup will be needed to activate the code generation for that module.
1. Create the composable for the screen on the feature module
2. (if it has arguments) Create a `*NavArgs` data class on the `argument` package of the `:core:navigation` module with the arguments for that screen
3. Annotate the screen composable with `@Destination`
4. (if it has arguments) Pass the previously created `*NavArgs` class on the `navArgsDelegate` parameter of the `@Destination` annotation
5. Run the code generation. This can be achieved by building the project or running `gradle kspDebugKotlin`
6. Add the generated class to the `NavGraphs` object (`:app` module). It is usually added to the `root` nav graph, but new nav graphs can be created
7. Create a sub-class for that screen on the `Screen` sealed class (`:core:navigation` module)
   - If it has no arguments, it can be a `data object`
   - If it has arguments, it can be a `data class` with `args: *NavArgs` as a field
8. On the `DirectionProviderImpl` (`:app` module), add the mapping from the newly created sub-class to the generated Destination class for that screen
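Steps 2 and 7 can be sketched in pure Kotlin as follows (the names are illustrative, and the `@Destination` annotation from Compose Destinations is omitted so the sketch stays self-contained):

```kotlin
// Step 2: a *NavArgs data class with the arguments for the screen
// (argument package of the :core:navigation module).
data class ProductNavArgs(val productId: String)

// Step 7: a sub-class per screen on the Screen sealed class
// (:core:navigation module).
sealed class Screen {
    // No arguments: a data object is enough.
    data object Home : Screen()

    // With arguments: a data class holding the *NavArgs as a field.
    data class Product(val args: ProductNavArgs) : Screen()
}
```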
Creating a screen accessible only within its feature module is similar to creating a screen with global access (as described in the previous section), but without steps 7 and 8.
In step 2, the `*NavArgs` class can be created in the feature module instead of the `:core:navigation` module.
The library treats bottom sheets as destinations (like screens). Creating a bottom sheet is similar to creating a screen, with a small adaptation in step 3: `DestinationStyleBottomSheet::class` should be passed to the `style` parameter of the `@Destination` annotation.
Dismissing the bottom sheet is as simple as executing `popBackStack()` (or `navigateUp()`) on the `DestinationsNavigator`.
Dialogs can also be treated as destinations by the library. Similarly to bottom sheets, the style `DestinationStyle.Dialog::class` can be used.
For simple dialogs (e.g. confirmation dialogs with actions), it might be easier to implement them in the normal way instead of creating a destination.
1. Inject the `DirectionProvider` on the screen composable and use the `fromScreen` function to get the `Direction` from the `Screen` sub-class
2. Inject the `DestinationsNavigator` on the screen composable and `navigate` to the `Direction`

If the destination is in the same module, there is no need for step 1, as the Destination class can be used directly in step 2.
When using `navArgsDelegate`, the destination arguments can be obtained in the ViewModel through the `SavedStateHandle` (which can be injected with Hilt). The `navArgs()` extension can be used on the `SavedStateHandle` to get the arguments class.
If the destination has no ViewModel, the arguments can be obtained through the `NavBackStackEntry` (which can be injected on the screen composable). The `argsFrom(navBackStackEntry)` function of the destination class can be used to get them.
The name of the branch should start with the ticket name followed by a branch-specific name: `[JIRA-ticket]_awesome_feature`.
Branches should be contained in subdirectories (`feature/`, `bugfix/`, `chore/`, `release/` or `hotfix/`), making them more manageable and easier to organize.
- `feature` is the subdirectory for every ticket that adds new code to the repository
- `bugfix` should be used for every bug fix that is not being immediately deployed into production
- `chore` is used for simple maintenance tasks which do not require going through QA
- `release` is to be used only when creating new app versions
- `hotfix` is intended for bug fixes which will be applied to production as soon as possible
Taking these guidelines as a reference, the commit message should represent the nature of the work as well as the ticket associated with it, so it is easier to later on understand the context in which the change was done.
The template of the commit message is `{[JIRA-ticket]} {desc}`. As an example, a feature commit associated with ticket XXAA-1 would be `[XXAA-1] Awesome feature boilerplate`.
- A pull request needs at least 2 approvals before being sent for testing by QAs or merged
- If there are enough approvals but there are pending comments, those need to be addressed and resolved before testing or merging
- In case of UI additions or changes, please try to add a screenshot, video or GIF to make it easier to understand
- Always add a comment explaining the context of the work
- Squash merge, cleaning up the message history if needed, and follow the commit message convention as specified above
- If reviewing, you are responsible for resolving the discussions if you're OK with the reply or changes done
- If you are the MR owner, reply with "Done" or react with 👍, and avoid resolving the comment, since resolving the discussion makes it harder for the reviewer to pinpoint the changes
Follow the Gitflow Workflow for creating feature branches, as well as for managing releases, hotfixes or any other type of work that might be needed.
Lint is one of the validation steps for any pull request. For that, the tool Detekt is used.
The tool configuration can be found in `config/detekt/detekt.yml`. That's where all rules are defined and can be configured.
To run the tool, execute the following Gradle task: `gradle detekt`
When lint fails, take one of the following approaches:
- Fix the issue pointed out by the tool
  - You can attempt to fix it automatically with the `--auto-correct` flag - this only works for formatting issues
- If you think the rule in question should be changed, ask the team and, if everyone agrees, edit the configuration file
- If you think you have an exception to the rule, either:
  - add it manually to the baseline file
  - automatically generate the baseline file by running the Gradle task `gradle detektProjectBaseline` - Attention: this will add all the identified issues to the baseline, so make sure to only run this task when the exceptions are the only issues identified
Besides the detekt Gradle task, we suggest using the Detekt Android Studio plugin in order to see the lint warnings in the code (the plugin can be configured with our configuration file to follow the same rule set).
Workflows are sets of steps covering the usual actions required to run a pipeline, such as cloning the branch, restoring the cache or deploying run results.
The CI also supports Fastlane integration, where we can set lane calls and pass any arguments we might need. This allows us to offload most of the Android-specific tasks to Fastlane and keep CI responsible only for the flows and steps.
- `branch_validation`
  - Detekt: checks if the linting rules are applied
  - Unit Tests: checks if all modules' tests pass
Each trigger type requires values, which can be set as regexes to match the different branch names we want the triggers to apply to. We also need to assign the workflow to be run when each trigger fires. Three main types of triggers can be set and used:
- Push: needs a format for the `push branch`; it might be useful to run specific tasks before opening a PR, but it is currently not being used
- Pull Request: needs a format for the `source branch` and the `target branch`; it runs when a Pull Request is opened, as well as on any commit pushed afterwards. This is the main trigger used throughout the development lifecycle and is also enabled for Draft Pull Requests
- Tag: needs a format for the `tag`; it runs the associated workflow when a tag is pushed. It is currently not being used (review on releases distribution)
Besides the dashboard UI oriented configuration available, we can also use the more traditional configuration file. This is pushed to the repository and then CI can pick it up and run the pipeline accordingly. This way we have full control of the versioning and it can also go through the normal peer review process as any other change to the project.
The dashboard visual representation and the YAML configuration file are interchangeable so a change in one will be reflected on the other. This offers the flexibility to use the approach that best works for you, keeping in mind that any change still needs to update the configuration file.
One downside is that the YAML file will contain the configuration for all the workflows, which can make the file quite busy, so we should put as many tasks as possible in the Fastfile. This also makes us more future-proof in case the CI/CD tool changes.
The workflows used for CD are chained with the `branch_validation` workflow and only run once it has successfully finished. This means that each workflow is responsible for its build variant, and we can reuse the `branch_validation` workflow for the shared tasks we need.
Every time a Pull Request is created that does not match the release branch format, it kicks off the `delivery_firebase_debug` workflow and distributes the build to the QA group.
- `delivery_firebase_debug`
  - Debug Build: checks if the build runs successfully for the Debug variant
  - Chained workflow: runs the `branch_validation` workflow before
  - Trigger: pull request where the source and target branch match any branch name
Every time a change is pushed into master, it kicks off the `delivery_firebase_debug` workflow and distributes the build to the QA group.
- `delivery_firebase_debug`
  - Debug Build: checks if the build runs successfully for the Debug variant
  - Chained workflow: runs the `branch_validation` workflow before
  - Trigger: pull request where the source and target branch match any branch name
The release branch needs to have the format `release/Alfie-M.m.p`; once a Pull Request is created, it kicks off the `delivery_firebase_beta` workflow and distributes the build to the Mindera group.
- `delivery_firebase_beta`
  - Beta Build: checks if the build runs successfully for the Beta variant
  - Chained workflow: runs the `branch_validation` workflow before
  - Trigger: pull request where the source matches `release/Alfie-M.m.p` and the target branch can be any name
  - Versions: updates the version name and code, only once, in case the Gradle version does not match the branch `M.m.p`
The pushed tag needs to have the format `release-M.m.p`; once pushed, it kicks off the `delivery_release` workflow and distributes the build to the Mindera group.
- `delivery_release`
  - Release Build: checks if the build runs successfully for the Release variant
  - Chained workflow: runs the `branch_validation` workflow before
  - Trigger: tag pushed with the format `release-M.m.p`
There are different target audiences depending on the type of build we are distributing:
- QA: the Quality Assurance team
- Mindera: includes the `QA` group as well as the rest of the internal Mindera product team
- Alfie: Alfie stakeholders
Work in progress 🚧
We aim to achieve the highest test coverage by area/class responsibility instead of overall project coverage percentage. By doing so, we make sure that we cover the important logic and state handling:
- Data
- DTO to Domain mapping
- Mappers/Factories
- Domain
- Use Cases
- Business logic
- Mappers/Factories
- Presentation
- View Model state/events handling
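As an example of the kind of focused test this implies, here is a pure-Kotlin sketch of a DTO-to-domain mapping check (the DTO shape is hypothetical, standing in for an Apollo-generated model):

```kotlin
// Hypothetical DTO, as the data layer might receive it from the API.
data class BagDto(val itemIds: List<String>?)

// Domain model: no nullable collections leak out of the data layer.
data class Bag(val itemIds: List<String>)

// Mapper under test: a null list from the API becomes an empty domain list.
fun BagDto.toDomain(): Bag = Bag(itemIds = itemIds.orEmpty())

// The unit test pins down the null-handling behaviour explicitly.
fun testNullItemListMapsToEmpty() {
    check(BagDto(itemIds = null).toDomain() == Bag(itemIds = emptyList()))
    check(BagDto(itemIds = listOf("a")).toDomain().itemIds == listOf("a"))
}
```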
The code coverage report can be generated by running the task `gradle :app:koverHtmlReportRelease`.
Some filters are applied in order to have coverage metrics only for the testable files/classes/functions. If new filters are needed, they can be added in the `AppConventionPlugin.kt` configuration.