Development Notes
As with most node/javascript projects, you'll need to run `npm install` after cloning this repository (or your fork) to download its dependencies.
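A typical first-time setup looks something like this (the clone URL is a placeholder for this repo or your fork, and the directory name assumes the repo is called `netitor`):

```shell
# clone the repository (or your fork), then install its dependencies
git clone <repository-or-fork-url>
cd netitor   # directory name assumed from the repo's build output (netitor.js)
npm install
```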
All the source code can be found in `src`. As mentioned before, the `index.html` page is a working example and can be used to test changes you make to the source code. The `build` folder contains the compiled source code (which is used by `index.html`) and can be built by running `npm run build`. The build process will do the following:
- First it'll run `npm run lint` to make sure all the source code conforms to the JavaScript Standard Style. If it does not, it will throw errors in the console letting you know which lines are off (I recommend installing the JS Standard plugin in your code editor so you spot lint errors while you code, rather than having to bounce back and forth between your editor and console every time you build).
- Then it'll run `npm run compile-css`, which takes our `src/css/main.css` file (which contains our custom syntax highlighting themes) as well as the codemirror CSS files and bundles them up into a js module, `src/css/css.js`, which is used by `src/main.js` to inject the relevant CSS into the page.
- Then it'll run `browserify` to bundle all the source code into `build/netitor.js`, as well as `terser` to create the minified build `build/netitor.min.js`.
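To illustrate the idea behind the `compile-css` step, here is a minimal sketch (a hypothetical helper, not the project's actual build script) of how CSS text can be wrapped into a js module that exports it as a string, which is what lets `src/main.js` inject it into the page:

```javascript
// Sketch of the idea behind `npm run compile-css` (hypothetical helper,
// NOT the project's actual build script): turn raw CSS text into the
// source of a js module that exports that text as a string.
function cssToModule (css) {
  // JSON.stringify safely escapes quotes, newlines and backslashes
  return 'module.exports = ' + JSON.stringify(css) + '\n'
}

// What main.js conceptually does with the bundled CSS: create a <style>
// tag and append it to the document head (doc is passed in so this
// sketch stays testable outside a browser).
function injectCSS (css, doc) {
  const style = doc.createElement('style')
  style.textContent = css
  doc.head.appendChild(style)
  return style
}

module.exports = { cssToModule, injectCSS }
```

A build script along these lines would read `src/css/main.css` (plus the codemirror CSS files), run the concatenated text through `cssToModule`, and write the result to `src/css/css.js`.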
To make things easier, you can alternatively run `npm run watch`, which will listen for any changes to the js files in `src` and auto-run the build process for you every time you make changes. NOTE: this only watches changes to js files, so any change to `src/css/main.css` requires manually running `npm run build` or `npm run compile-css` directly.
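For reference, scripts like these are typically wired together in `package.json` along the following lines (a rough sketch, not the project's exact file; the individual script bodies are assumptions):

```json
{
  "scripts": {
    "lint": "standard src",
    "compile-css": "node compile-css.js",
    "build": "npm run lint && npm run compile-css && browserify src/main.js -o build/netitor.js && terser build/netitor.js -o build/netitor.min.js",
    "watch": "watch 'npm run build' src"
  }
}
```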
The `nfo` property in the object passed to callbacks attached to the `edu-info` event is populated with data from the json files located in the `edu-data` folder. These files are generated by running `npm run eduscraper`. It is VERY UNLIKELY you will need to run this script. These json files have already been created, so the only reason to run it is if/when there is new data available on one of the scraped websites and/or there's been an update to the eduscraper repo (in which case you would first need to delete the `node_modules/eduscraper` directory and rerun `npm install` to download the latest version before running it).
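The refresh steps described above amount to something like:

```shell
# only needed when the eduscraper repo itself has been updated
rm -rf node_modules/eduscraper
npm install          # re-downloads the latest eduscraper
npm run eduscraper   # regenerates the json files in edu-data
```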