
Development Notes

Nick Briz edited this page Jul 29, 2020 · 11 revisions

setup

As with most node/javascript projects, you'll need to run npm install after cloning this repository (or your fork) to download its dependencies.

workflow

All the source code can be found in src. As mentioned before, the index.html page is a working example and can be used to test changes you make to the source code. The build folder contains the compiled source code (which is used by index.html) and can be built by running npm run build. The build process will do the following:

  1. First it'll run npm run lint to make sure all the source code conforms to the JavaScript Standard Style. If it does not, it will throw errors in the console letting you know which lines are off. (I recommend installing the JS Standard plugin in your code editor so you spot lint errors while you code, rather than having to bounce back and forth between your editor and console every time you build.)

  2. Then it'll run npm run compile-css, which takes our src/css/main.css file (which contains our custom syntax highlighting themes) as well as the codemirror CSS files and bundles them up into a js module, src/css/css.js, which is used by src/main.js to inject the relevant CSS into the page.

  3. Then it'll run browserify to bundle all the source code into build/netitor.js, as well as terser to create the minified build, build/netitor.min.js.
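Step 2 above (bundling CSS into a js module) can be sketched as follows. This is a hypothetical illustration, not the actual compile-css script: the wrapCSS helper name and the inline example are assumptions, but the general idea matches the description above, where raw CSS text becomes a requirable js module.

```javascript
// Hypothetical sketch of what a "compile-css" step could do:
// wrap raw CSS text into the source code of a CommonJS module,
// so the editor can require() it and inject it into the page.
function wrapCSS (cssText) {
  // JSON.stringify safely escapes quotes/newlines in the CSS
  return `module.exports = ${JSON.stringify(cssText)}\n`
}

// In the real build this would read src/css/main.css (and the
// codemirror CSS files) with fs.readFileSync, then write the
// result to src/css/css.js. Illustrative example:
console.log(wrapCSS('body { color: red }'))
// → module.exports = "body { color: red }"
```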

To make things easier, you can alternatively run npm run watch, which will listen for changes to any js files in src and auto-run the build process for you every time you make changes. NOTE: this only watches changes to js files, so any change to src/css/main.css requires manually running npm run build or npm run compile-css directly.
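Taken together, the scripts mentioned in this section might be wired up in package.json roughly like this. The script names come from this page, but the exact commands, flags, the build-css.js file name, and the use of watchify for the watch task are all assumptions; check the repo's actual package.json:

```json
{
  "scripts": {
    "lint": "standard src/**/*.js",
    "compile-css": "node build-css.js",
    "build": "npm run lint && npm run compile-css && browserify src/main.js -o build/netitor.js && terser build/netitor.js -o build/netitor.min.js",
    "watch": "watchify src/main.js -o build/netitor.js"
  }
}
```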

edu-info

The nfo property in the object passed to callbacks attached to the edu-info event is populated with data from the json files located in the edu-data folder. These files are generated by running npm run eduscraper. It is VERY UNLIKELY you will need to run this script, because these json files have already been created. The only reason to run it is if/when there is new data available on one of the scraped websites and/or there's been an update to the eduscraper repo (in which case you would first need to delete the node_modules/eduscraper directory and rerun npm install to download the latest version before running it).
