This project has been generated by react-native-make.
To be able to install and run the mobile apps (iOS and Android) and the web app, you first need to:

- Set up your environment
- Configure Sentry through this tutorial
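If you are setting Sentry up from scratch, the standard React Native setup goes through Sentry's official wizard. This is a minimal sketch of that generic flow, not necessarily this project's exact procedure (follow the tutorial above for project-specific values such as the DSN):

```bash
# Add the Sentry SDK for React Native (generic setup, may differ from this project's tutorial)
yarn add @sentry/react-native

# Run the interactive wizard: it patches the native iOS/Android build steps
# and asks for your project's DSN
npx @sentry/wizard -i reactNative
```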
To run the mobile app on Android or iOS, you will need to follow the installation steps:

- Install the Android app
- Install the iOS app

To run the web app in your browser, follow the steps here.
Access the storybook.

To run the storybook in your browser, follow the steps here.

Access the Allure report.
Starting with react-native 0.76, we use the integrated dev tools.
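As a quick reminder, the integrated React Native DevTools open straight from the Metro terminal; these are the standard shortcuts (not project-specific):

```bash
# In the terminal running Metro (yarn start):
#   j  opens React Native DevTools (debugger, profiler) — integrated since RN 0.76
#   d  opens the in-app developer menu on the emulator/device
#   r  reloads the app
```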
In the `doc/` folder you will find the dev standards the team members follow.
To add a dev standard

Standards can of course be improved and new ones can be added.

- Create a pull request with the standard modification/addition (use `TEMPLATE.md` for additions)
- Ask all team members to read your PR. Why: so that the team is aligned on how to code, and the best way to do something is shared among all members
- Make sure you have the approval of every member of the team
- You can merge :)
You can run the tests with `yarn test`. This command will:

- Run `eslint` on your project
- Check the TypeScript types
- Run the `jest` tests
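As a rough sketch, the three steps above are equivalent to running the following commands individually (the exact script names live in `package.json`; these invocations are an assumption, not the project's actual scripts):

```bash
# Hypothetical breakdown of `yarn test` — check package.json for the real script names
yarn eslint . --ext .js,.jsx,.ts,.tsx   # lint the project
yarn tsc --noEmit                       # type-check without emitting files
yarn jest                               # run the unit tests
```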
You can run the jest tests in watch mode with `yarn jest --watch`. You can also get the coverage with `yarn jest --coverage`.
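A few handy variations for day-to-day work, using standard jest CLI flags (the test file path below is hypothetical):

```bash
# Watch only the tests related to files changed since the last commit
yarn jest --watch

# Watch everything, or a single file (hypothetical path)
yarn jest --watchAll
yarn jest --watch src/features/auth/api.test.ts

# Run only tests whose name matches a pattern
yarn jest -t "login"

# Coverage for the whole project
yarn jest --coverage
```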
Update the API schema
If the backend changes the API schema, you will need to update it:

- Pull the `swagger-codegen-cli-v3` image: `docker pull swaggerapi/swagger-codegen-cli-v3`
- Run `yarn generate:api:client` (or `yarn generate:api:client:silicon` on Apple Silicon chips)
If the file `src/api/gen/.swagger-codegen/VERSION` changes, make sure you locally have the desired version of `swagger-codegen-cli`; otherwise run `docker pull swaggerapi/swagger-codegen-cli-v3:3.0.24`.
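For example, to check which codegen version the generated client expects and pull the matching image (tag 3.0.24 is the one mentioned above; adjust it if `VERSION` says otherwise):

```bash
# See which swagger-codegen version the generated client was built with
cat src/api/gen/.swagger-codegen/VERSION

# Pull the matching image tag, then regenerate the client
docker pull swaggerapi/swagger-codegen-cli-v3:3.0.24
yarn generate:api:client            # or generate:api:client:silicon on Apple Silicon
```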
To develop with a local API

See the docs to learn how to develop with a local API "superficially".

The other option, more complex, is to create a specific scheme 'Development' with a `.env.development` file: copy the `.env.testing` configuration and update the `API_BASE_URL` setting with your local server address. Make sure you also override the `BATCH_API_PUBLIC_KEY_ANDROID` and `BATCH_API_PUBLIC_KEY_IOS` variables with the dev values of the testing Batch project.

Then copy `testing.keystore` into `development.keystore` and `testing.keystore.properties` into `development.keystore.properties`, and replace the `storeFile` value in `development.keystore.properties`.
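Put together, the 'Development' scheme setup might look like this; the keystore paths and the example URL are assumptions (check where `testing.keystore` actually lives in this repo):

```bash
# Create the development env file from the testing one
cp .env.testing .env.development
# then edit .env.development:
#   - point API_BASE_URL at your local server, e.g. http://localhost:5001 (example value)
#   - set BATCH_API_PUBLIC_KEY_ANDROID / BATCH_API_PUBLIC_KEY_IOS to the dev values
#     of the testing Batch project

# Duplicate the Android signing config (paths are hypothetical — adapt to this repo's layout)
cp android/keystores/testing.keystore            android/keystores/development.keystore
cp android/keystores/testing.keystore.properties android/keystores/development.keystore.properties
# finally, edit development.keystore.properties and update the storeFile value
```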
Test login credentials

See Keeper for all the testing accounts.

To download the testing app, visit Appcenter for iOS and Android. For the staging app, use these links for iOS and Android.
See the doc about the deployment process for the mobile application here.
You can find most of the code related to performance measurement in `src/performance`.

For local development, you can monitor performance by pressing `d` in your Metro console. The developer menu should pop up on your emulator; press the "Performance monitor" button. You can then track the JS thread, UI thread and RAM usage on the features you are developing.
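If the `d` shortcut does not reach your device, the developer menu can also be opened directly with adb on Android (standard React Native tooling, not project-specific):

```bash
# Open the React Native developer menu on a connected Android emulator/device
adb shell input keyevent 82
# then tap "Show Perf Monitor" to display the FPS, JS thread and RAM overlays
```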
Thanks to maestro and flashlight, we get a performance measurement for each version of the app in the CI. Additionally, you can add the tag `e2e perfs` to your PR to get the flashlight performance score of your feature.
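Outside the CI, a flashlight run over a maestro flow looks roughly like this; the sketch is based on flashlight's public CLI, and the bundle id and flow file below are placeholders, not this project's real values:

```bash
# Drive the app with a maestro flow and let flashlight collect the metrics
# (bundle id and flow path are placeholders — use this project's real values)
flashlight test \
  --bundleId com.example.app \
  --testCommand "maestro test .maestro/my-flow.yml" \
  --resultsFilePath results.json

# Or watch live metrics while you use the app manually
flashlight measure
```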
Here is an example of a flashlight performance report:
```
===== Aggregated Performance Summary =====
- Overall Score 82.00 / 100
- Successful Iterations 10 / 10

===== Averaged Metrics (across all successful runs) =====
- Average FPS 39
- Average RAM Usage 250 MB
- Average Total CPU 51 %

===== Average CPU Usage Per Thread =====
- UI Thread 4.45 %
- mqt_js 0.00 %
- RenderThread 30.27 %
```
Starting in version v344, to measure performance, we have decided to measure the time to interactive (TTI) of the Home (see adr).

At the moment, the measure seems more reliable on iOS (between 10 and 20 seconds) than on Android (where we are getting TTIs of more than 40 seconds). Because of the way we measure the TTI (we trigger an event when the Home finishes loading), if any screen interposes itself between the start of the app and the Home, the time the user spends on that screen will be counted in the TTI, thus producing erroneous measures. We are still investigating possible causes.
You can find this measure in Firebase performance monitoring for the different environments:
We currently use lighthouse to measure performance on the web app. A CI job runs lighthouse once a week; you can see the runs here. By running the performance test weekly, we get a measure for each version of the app. Here is an example of a report:
```
--- Lighthouse Performance Summary ---
Performance Score: 48 / 100
FCP: 0.8 s
LCP: 16.6 s
TBT: 590 ms
CLS: 0.108
------------------------------------
```

(FCP: First Contentful Paint, LCP: Largest Contentful Paint, TBT: Total Blocking Time, CLS: Cumulative Layout Shift.)
You can also run lighthouse locally from your browser's dev tools (make sure to run it in incognito mode, so that extensions do not degrade performance).
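Lighthouse also ships as a CLI, which avoids the browser-extension problem entirely; here is a minimal local run against a placeholder URL:

```bash
# One-off performance audit from the command line (URL is a placeholder)
npx lighthouse https://app.example.com \
  --only-categories=performance \
  --view                             # opens the HTML report when done
```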