pass-culture/pass-culture-app-native

This project has been generated by react-native-make.

Getting Started

To install and run the mobile apps (iOS and Android) and the web app, you first need to follow the setup steps for each platform below.

📱 Mobile

To run the mobile app on Android or iOS, follow the installation steps.

💻 Web

To run the web app in your browser, follow the steps here

💄 Storybook

Access the Storybook.

To run Storybook in your browser, follow the steps here

πŸ‹ Allure report

Access the Allure report.

Development

Debugging

Starting with React Native 0.76, we use the integrated dev tools.

Standards

In the doc/ folder you will find the dev standards the team follows.

To add a dev standard

Standards can of course be improved, and new ones can be added. To do so:

  1. Create a pull request with the standard modification/addition (use TEMPLATE.md for additions)
  2. Ask all team members to read your PR

     Why: so that the team is aligned on how to code, and the best way to do something is shared among all members

  3. Make sure you have the approval of every member of the team
  4. You can merge :)

Testing

You can run the tests with yarn test. This command will:

  • Run ESLint on your project
  • Check the TypeScript types
  • Run the Jest tests

You can run the jest tests in watch mode with:

yarn jest --watch

You can also get the coverage with:

yarn jest --coverage

Local development

πŸ“ Update the API schema If the backend changes the api schema, you will need to update it:
  • pull the swagger-codegen-cli-v3 image: docker pull swaggerapi/swagger-codegen-cli-v3
  • run: yarn generate:api:client
  • or run yarn generate:api:client:silicon on Apple Silicon chips

If the file src/api/gen/.swagger-codegen/VERSION changes, make sure you locally have the desired version of swagger-codegen-cli, otherwise run docker pull swaggerapi/swagger-codegen-cli-v3:3.0.24

To develop with a local API

See the docs to learn how to develop with a local API "superficially".

The other, more complex, option is to create a specific 'Development' scheme with a .env.development file: copy the .env.testing configuration and update the API_BASE_URL setting with your local server address.

Make sure you also overload the BATCH_API_PUBLIC_KEY_ANDROID and BATCH_API_PUBLIC_KEY_IOS variables with the dev values of the testing batch project.

Then copy testing.keystore into development.keystore and testing.keystore.properties into development.keystore.properties. Replace the storeFile value in development.keystore.properties.
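The copy-and-override steps above can be sketched as a small script. The file and variable names (`.env.testing`, `API_BASE_URL`, the Batch keys) come from the paragraphs above, but `overrideEnv` is a hypothetical helper written for illustration, not part of the project, and the local address is an example:

```typescript
// Hypothetical sketch: derive a .env.development from the .env.testing
// contents, overriding only the keys mentioned above. Not project code.

/** Replace (or append) KEY=value pairs in a dotenv-style string. */
export function overrideEnv(content: string, overrides: Record<string, string>): string {
  const seen = new Set<string>();
  const result = content.split('\n').map((line) => {
    const match = line.match(/^([A-Z0-9_]+)=/);
    if (match && overrides[match[1]] !== undefined) {
      seen.add(match[1]);
      return `${match[1]}=${overrides[match[1]]}`;
    }
    return line;
  });
  // Keys not present in the testing config are appended.
  for (const [key, value] of Object.entries(overrides)) {
    if (!seen.has(key)) result.push(`${key}=${value}`);
  }
  return result.join('\n');
}

// Example: point the app at a local API (the address is illustrative).
const testingEnv = 'API_BASE_URL=https://backend.testing.example\nFEATURE_FLAG=1';
const devEnv = overrideEnv(testingEnv, { API_BASE_URL: 'http://127.0.0.1:5001' });
```

The same helper would be used to overload BATCH_API_PUBLIC_KEY_ANDROID and BATCH_API_PUBLIC_KEY_IOS with the dev values of the testing Batch project.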

Test login credentials

See in Keeper for all testing accounts.

⬇️ Download

To download the testing app, visit Appcenter for iOS and Android. For the staging app, use these links for iOS and Android.

⚠️ Make sure your device is registered in the device list.


Deployment

See the deployment process documentation here for the mobile application.

Performance

You can find most of the code related to performance measurement in src/performance.

Mobile performance

Local development

For local development, you can monitor performance by pressing d in your Metro console. The developer menu should pop up on your emulator; press the "Performance monitor" button. You can then track the JS thread, UI thread, and RAM usage for the features you are developing.

Deployed versions (Android only)

Thanks to maestro and flashlight, we get a performance measurement for each version of the app in the CI.

Here are examples of runs.

Additionally, you can add a tag e2e perfs to your PR to get the flashlight performance score of your feature.

Here is an example of a flashlight performance report:

===== Aggregated Performance Summary =====
  - Overall Score            82.00 / 100
  - Successful Iterations    10 / 10

===== Averaged Metrics (across all successful runs) =====
  - Average FPS              39
  - Average RAM Usage        250 MB
  - Average Total CPU        51 %

===== Average CPU Usage Per Thread =====
  - UI Thread                4.45 %
  - mqt_js                   0.00 %
  - RenderThread             30.27 %
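An aggregated summary like the one above boils down to averaging per-iteration metrics over the successful runs. A minimal sketch of that aggregation, with illustrative names (this is not flashlight's actual implementation):

```typescript
// Hypothetical sketch of how per-iteration measurements average into a
// summary like the flashlight report above. Not flashlight's real code.

interface IterationMetrics {
  fps: number;
  ramMb: number;
  cpuPercent: number;
  success: boolean;
}

function average(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

export function summarize(iterations: IterationMetrics[]) {
  // Failed iterations are counted but excluded from the averages.
  const ok = iterations.filter((it) => it.success);
  return {
    successfulIterations: `${ok.length} / ${iterations.length}`,
    averageFps: Math.round(average(ok.map((it) => it.fps))),
    averageRamMb: Math.round(average(ok.map((it) => it.ramMb))),
    averageCpuPercent: Math.round(average(ok.map((it) => it.cpuPercent))),
  };
}
```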

Production measurements

Starting with version v344, to measure performance, we have decided to measure the time to interactive (TTI) of the Home screen (see the ADR).

At the moment, the measure seems more reliable on iOS (between 10 and 20 seconds) than on Android (where we see TTIs of more than 40 seconds). Because of the way we measure the TTI (we trigger an event when the Home finishes loading), if any screen appears between app start and the Home, the time the user spends on that screen is counted in the TTI, producing erroneous measures. We are still investigating possible issues.
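Conceptually, the measurement is two timestamps: one at app start, one when the Home finishes loading. A simplified sketch of that idea, with illustrative names (the app's real code reports the duration through Firebase Performance, and is not shown here):

```typescript
// Simplified sketch of the TTI measurement described above: timestamp at
// app start, timestamp when the Home finishes loading. Illustrative only.

export class TtiTracker {
  private startedAt: number | null = null;
  private ttiMs: number | null = null;

  // The clock is injectable so the behavior can be tested deterministically.
  constructor(private now: () => number = Date.now) {}

  /** Call once, as early as possible at app start. */
  start(): void {
    this.startedAt = this.now();
  }

  /** Call when the Home finishes loading; returns the TTI in ms. */
  homeLoaded(): number {
    if (this.startedAt === null) throw new Error('start() was never called');
    // Caveat from the text above: any screen shown before the Home
    // inflates this value, since the clock keeps running until Home loads.
    if (this.ttiMs === null) this.ttiMs = this.now() - this.startedAt;
    return this.ttiMs;
  }
}
```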

You can find this measure in Firebase Performance Monitoring for the different environments:

Web performance

We currently use Lighthouse to measure performance of the web app. A CI job runs Lighthouse once a week; you can see the runs here.

By running the performance test once a week, we get a measurement for each version of the app. Here is an example report:

--- Lighthouse Performance Summary ---
🟒 Performance Score: 48 / 100
 FCP: 0.8 s
 LCP: 16.6 s
 TBT: 590 ms
 CLS: 0.108
------------------------------------
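Lighthouse's documented scoring bands are 0-49 (red), 50-89 (orange), and 90-100 (green); a small helper makes the classification explicit (the function is written for illustration, not project code):

```typescript
// Classify a Lighthouse score into its documented color bands:
// 0-49 red, 50-89 orange, 90-100 green. Illustrative helper only.
export function scoreBand(score: number): 'red' | 'orange' | 'green' {
  if (score < 0 || score > 100) throw new RangeError('score must be in 0..100');
  if (score >= 90) return 'green';
  if (score >= 50) return 'orange';
  return 'red';
}
```

A performance score of 48, as in the report above, falls in the red band, consistent with the large LCP and TBT values.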

You can also run Lighthouse locally from your browser's dev tools (run it in incognito mode so that extensions don't degrade the results).