Getting Started

Parse Server is an open source version of the Parse backend that can be deployed to any infrastructure that can run Node.js. You can find the source on the GitHub repo.

  • Parse Server is not dependent on the hosted Parse backend.
  • Parse Server uses MongoDB directly, and is not dependent on the Parse hosted database.
  • You can migrate an existing app to your own infrastructure.
  • You can develop and test your app locally using Node.


Prerequisites

  • Node 8 or newer
  • MongoDB version 3.6
  • Python 2.x (For Windows users, 2.7.1 is the required version)
  • For deployment, an infrastructure provider like Heroku or AWS

The fastest and easiest way to get started is to run MongoDB and Parse Server locally. Use the bootstrap script to set up Parse Server in the current directory.

$ sh <(curl -fsSL
$ npm install -g mongodb-runner
$ mongodb-runner start
$ npm start

You can use any arbitrary string as your application id and master key. These will be used by your clients to authenticate with the Parse Server.

That’s it! You are now running a standalone version of Parse Server on your machine.

Saving your first object

Now that you’re running Parse Server, it is time to save your first object. We’ll use the REST API, but you can easily do the same using any of the Parse SDKs. Run the following:

curl -X POST \
-H "X-Parse-Application-Id: APPLICATION_ID" \
-H "Content-Type: application/json" \
-d '{"score":123,"playerName":"Sean Plott","cheatMode":false}' \
http://localhost:1337/parse/classes/GameScore

You should get a response similar to this:

{
  "objectId": "2ntvSpRGIK",
  "createdAt": "2016-03-11T23:51:48.050Z"
}

You can now retrieve this object directly (make sure to replace 2ntvSpRGIK with the actual objectId you received when the object was created):

$ curl -X GET \
  -H "X-Parse-Application-Id: APPLICATION_ID" \
  http://localhost:1337/parse/classes/GameScore/2ntvSpRGIK

// Response
{
  "objectId": "2ntvSpRGIK",
  "score": 123,
  "playerName": "Sean Plott",
  "cheatMode": false,
  "updatedAt": "2016-03-11T23:51:48.050Z",
  "createdAt": "2016-03-11T23:51:48.050Z"
}

Keeping track of individual object ids is not ideal, however. In most cases you will want to run a query over the collection, like so:

$ curl -X GET \
  -H "X-Parse-Application-Id: APPLICATION_ID" \
  http://localhost:1337/parse/classes/GameScore

// The response will provide all the matching objects within the `results` array:
{
  "results": [
    {
      "objectId": "2ntvSpRGIK",
      "score": 123,
      "playerName": "Sean Plott",
      "cheatMode": false,
      "updatedAt": "2016-03-11T23:51:48.050Z",
      "createdAt": "2016-03-11T23:51:48.050Z"
    }
  ]
}
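The query endpoint also accepts a `where` parameter containing URL-encoded JSON. A minimal Node sketch of building such a request URL; the host, mount point and class name follow the examples above:

```javascript
// Build a Parse REST query URL with a URL-encoded `where` clause.
const base = 'http://localhost:1337/parse/classes/GameScore';

function buildQueryUrl(where) {
  // The REST API expects the constraint object as JSON in the query string.
  return base + '?where=' + encodeURIComponent(JSON.stringify(where));
}

const url = buildQueryUrl({ playerName: 'Sean Plott', cheatMode: false });
console.log(url);
// http://localhost:1337/parse/classes/GameScore?where=%7B%22playerName%22%3A%22Sean%20Plott%22%2C%22cheatMode%22%3Afalse%7D
```

Passing the resulting URL to curl with the same X-Parse-Application-Id header returns only the matching objects.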

To learn more about saving and querying objects on Parse Server, check out the documentation for the SDK you will be using in your app.

Connect your app to Parse Server

Parse provides SDKs for all the major platforms. Refer to the rest of the Parse Server guide to learn how to connect your app to Parse Server.

Running Parse Server elsewhere

Once you have a better understanding of how the project works, please refer to the Deploying Parse Server section to learn more about additional ways of running Parse Server.



Database

Parse Server lets you use either MongoDB or PostgreSQL as its database.

The preferred database is MongoDB, but PostgreSQL is a great option if you are starting a new project and expect your schema to be stable.


MongoDB

If you have not used MongoDB before, we highly recommend familiarizing yourself with it before proceeding.

The Mongo requirements for Parse Server are:

  • MongoDB version 3.6
  • An SSL connection is recommended (but not required).

If this is your first time setting up a production MongoDB instance, we recommend using either mLab or ObjectRocket. These are database-as-a-service companies which provide fully managed MongoDB instances, and can help you scale up as needed.

When using MongoDB with your Parse app, you need to manage your indexes yourself. You will also need to size up your database as your data grows.

If you are planning to run MongoDB on your own infrastructure, we highly recommend using the RocksDB Storage Engine.

In order to allow for better scaling of your data layer, it is possible to direct read operations to a MongoDB secondary. See: Mongo Read Preference.


Postgres

The Postgres requirements for Parse Server are:

  • Postgres version 9.5
  • PostGIS extensions 2.3

The Postgres database adapter is loaded automatically when you pass a valid Postgres URL, for example: postgres://localhost:5432. Additional configuration options can be appended to the URL as query-string parameters.


Details about the configuration options can be found in the pg-promise documentation. Some useful combinations are below:

  • SSL with verification - postgres://localhost:5432/db?ca=/path/to/file
  • SSL with no verification - postgres://localhost:5432/db?ssl=true&rejectUnauthorized=false
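Since these options ride on the connection URL itself, they can be inspected with Node's built-in WHATWG URL parser; the option names below are just the examples from the list above:

```javascript
// Pull the host, database name and query-string options out of a Postgres URL.
const dbUrl = new URL('postgres://localhost:5432/db?ssl=true&rejectUnauthorized=false');

// searchParams holds the adapter options appended to the URL.
const options = Object.fromEntries(dbUrl.searchParams);

console.log(dbUrl.hostname, dbUrl.port, dbUrl.pathname); // localhost 5432 /db
console.log(options); // { ssl: 'true', rejectUnauthorized: 'false' }
```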


  • Join tables are resolved in memory; there are no performance improvements when using Postgres over MongoDB for relations or pointers.
  • Mutating the schema implies running ALTER TABLE, so we recommend setting up your schema before your tables fill up.
  • Properly index your tables to maximize performance.
  • The Postgres URL for version 4.2.0 and below only supports a limited set of configuration options.



Usage

Parse Server is meant to be mounted on an Express app. Express is a web framework for Node.js. The fastest way to get started is to clone the Parse Server repo, which at its root contains a sample Express app with the Parse API mounted.

The constructor returns an API object that conforms to an Express Middleware. This object provides the REST endpoints for a Parse app. Create an instance like so:

var api = new ParseServer({
  databaseURI: 'mongodb://your.mongo.uri',
  cloud: './cloud/main.js',
  appId: 'myAppId',
  fileKey: 'myFileKey',
  masterKey: 'mySecretMasterKey',
  push: { ... }, // See the Push wiki page
  filesAdapter: ...,
});

The parameters are as follows:

  • databaseURI: Connection string URI for your MongoDB.
  • cloud: Path to your app’s Cloud Code.
  • appId: A unique identifier for your app.
  • fileKey: A key that specifies a prefix used for file storage. For migrated apps, this is necessary to provide access to files already hosted on Parse.
  • masterKey: A key that overrides all permissions. Keep this secret.
  • clientKey: The client key for your app. (optional)
  • restAPIKey: The REST API key for your app. (optional)
  • javascriptKey: The JavaScript key for your app. (optional)
  • dotNetKey: The .NET key for your app. (optional)
  • push: An object containing push configuration. See Push
  • filesAdapter: An object that implements the FilesAdapter interface. For example, the S3 files adapter
  • auth: Configure support for 3rd party authentication.
  • maxUploadSize: Maximum file upload size. Make sure your server does not restrict max request body size (e.g. nginx.conf client_max_body_size 100m;)

The Parse Server object was built to be passed directly into app.use, which will mount the Parse API at a specified path in your Express app:

var express = require('express');
var ParseServer = require('parse-server').ParseServer;

var app = express();
var api = new ParseServer({ ... });

// Serve the Parse API at /parse URL prefix
app.use('/parse', api);

var port = 1337;
app.listen(port, function() {
  console.log('parse-server-example running on port ' + port + '.');
});

And with that, you will have a Parse Server running on port 1337, serving the Parse API at /parse.



Parse Server does not require the use of client-side keys. This includes the client key, JavaScript key, .NET key, and REST API key. The Application ID is sufficient to secure your app.

However, you have the option to specify any of these four keys upon initialization. Upon doing so, Parse Server will enforce that any client passing one of these keys supplies the matching value. The behavior is consistent with hosted Parse.
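The enforcement rule can be pictured as a small check. This is an illustrative sketch of the behavior just described, not Parse Server's actual middleware; the header names are the standard X-Parse-* request headers:

```javascript
// Sketch: if a key type is configured, a client supplying that header must match it.
// If a key type was never configured, the corresponding header is simply ignored.
function keyAllowed(configuredKeys, requestHeaders) {
  const pairs = [
    ['restAPIKey', 'x-parse-rest-api-key'],
    ['clientKey', 'x-parse-client-key'],
    ['javascriptKey', 'x-parse-javascript-key'],
    ['dotNetKey', 'x-parse-windows-key'],
  ];
  return pairs.every(([confName, header]) => {
    const configured = configuredKeys[confName];
    const supplied = requestHeaders[header];
    // Only enforce when the key is configured and the client sent one.
    if (configured === undefined || supplied === undefined) return true;
    return configured === supplied;
  });
}

console.log(keyAllowed({ restAPIKey: 'r3st' }, { 'x-parse-rest-api-key': 'r3st' })); // true
console.log(keyAllowed({ restAPIKey: 'r3st' }, { 'x-parse-rest-api-key': 'wrong' })); // false
```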

Read-Only masterKey

Starting with parse-server 2.6.5, it is possible to specify a readOnlyMasterKey. When this key is used instead of the masterKey, the server will perform all read operations as if they were executed with the masterKey, but will refuse to execute any write operation.

This key is especially powerful when used with parse-dashboard. Please refer to Parse Dashboard’s documentation for more information.
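The split between reads and writes can be sketched as a simple gate. This is an illustration of the rule above, not Parse Server's internal code; the operation names are assumptions for the sketch:

```javascript
// Sketch of the read-only master key rule: reads behave like the master key,
// writes are rejected outright.
function operationPermitted(op, key, config) {
  const READS = ['find', 'get', 'count'];
  if (key === config.masterKey) return true;
  if (key === config.readOnlyMasterKey) return READS.includes(op);
  return false; // other callers fall through to normal ACL/CLP checks (omitted here)
}

const config = { masterKey: 'secret', readOnlyMasterKey: 'readonly' };
console.log(operationPermitted('find', 'readonly', config));   // true
console.log(operationPermitted('update', 'readonly', config)); // false
```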


Using Parse SDKs with Parse Server

To use a Parse SDK with Parse Server, change the server URL to your Parse API URL. For example, if you have Parse Server running locally mounted at /parse:

iOS / OS X / watchOS / tvOS


// Swift
let configuration = ParseClientConfiguration {
    $0.applicationId = "YOUR_APP_ID"
    $0.clientKey = ""
    $0.server = "http://localhost:1337/parse"
}
Parse.initialize(with: configuration)


// Objective-C
[Parse initializeWithConfiguration:[ParseClientConfiguration configurationWithBlock:^(id<ParseMutableClientConfiguration> configuration) {
   configuration.applicationId = @"YOUR_APP_ID";
   configuration.clientKey = @"";
   configuration.server = @"http://localhost:1337/parse";
}]];


Android

Parse.initialize(new Parse.Configuration.Builder(myContext)
    .applicationId("YOUR_APP_ID")
    .server("http://localhost:1337/parse/")
    .build()
);


JavaScript

Parse.initialize("YOUR_APP_ID");
Parse.serverURL = 'http://localhost:1337/parse'


.NET + Xamarin

ParseClient.Initialize(new ParseClient.Configuration {
    ApplicationId = "YOUR_APP_ID",
    Server = "http://localhost:1337/parse/"
});


PHP

ParseClient::initialize('YOUR_APP_ID', 'YOUR_CLIENT_KEY', 'YOUR_MASTER_KEY');
ParseClient::setServerURL('http://localhost:1337', 'parse'); // server url & mount path passed separately

Deploying Parse Server

The fastest and easiest way to start using Parse Server is to run MongoDB and Parse Server locally. Once you have a better understanding of how the project works, read on to learn how to deploy Parse Server to major infrastructure providers. If your provider is not listed here, please take a look at the list of articles from the community as someone may have already written a guide for it.

Deploying to Heroku and mLab

Heroku and mLab provide an easy way to deploy Parse Server, especially if you’re new to managing your own backend infrastructure.

Here are the steps:

  1. Create a repo for your Express app with the Parse Server middleware mounted (you can use our sample project, or start your own).
  2. Create a Heroku account (if you don’t have one already) and use the Heroku Toolbelt to log in and prepare a new app in the same directory as your Express app. Take a look at Heroku’s Getting Started with Node.js guide for more details.
  3. Use the mLab addon: heroku addons:create mongolab:sandbox (or, you can create a Mongo instance yourself, either directly with mLab or your own box)
  4. Use heroku config and note the URI provided by mLab under the var MONGOLAB_URI
  5. Copy this URI and set it as a new config variable: heroku config:set DATABASE_URI=mongodb://...
  6. Deploy it: git push heroku master

You may also refer to the Heroku Dev Center article on Deploying a Parse Server to Heroku.

Deploying to Glitch and mLab

Before you start, you’ll need:

  • mLab account (for free MongoDB)

Step 1: Creating your database on mLab

mLab provides a Database-as-a-Service for MongoDB. They include a free tier for small sandbox databases. Create an account on mLab and then use the Single-node, Sandbox plan to get a (free) database up and running. Within the mLab wizard, you’ll need to be sure to create a user that has access to connect to the new database. Upon completion, you should be able to construct a MongoDB connection string like the following:

mongodb://yourusername:yourpassword@yourmongohost:yourdatabaseport/yourdatabasename
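The pieces of such a connection string can be pulled apart with Node's URL parser; all the values here are placeholders:

```javascript
// Node's WHATWG URL parser understands mongodb:// URLs well enough to
// extract the credentials, host, port and database name.
const uri = new URL('mongodb://yourusername:yourpassword@yourmongohost:27017/yourdatabasename');

console.log(uri.username);          // yourusername
console.log(uri.hostname);          // yourmongohost
console.log(uri.port);              // 27017
console.log(uri.pathname.slice(1)); // yourdatabasename
```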

Step 2: Running parse-server-example on Glitch

Glitch provides an easy way to instantly create and deploy Node.js applications for free. We will use it to run the parse-server-example application.

To get the example server up and running for quick testing, you can simply click the button below:

Remix on Glitch

Now that the import is complete, we’ll need to make two small changes to the 🗝️.env file, which stores private environment variables.

It should look like the following:

# Environment Config

# store your secrets and config variables in here
# only invited collaborators will be able to see your .env values

# reference these in your code with process.env.SECRET


# note: .env is a shell file so there can't be spaces around =

First, change the DATABASE_URI value to your mLab connection string from step 1.

Next, change the project-name portion of the SERVER_URL value to the name of the project that was created, so that the URL points at your own Glitch project.

You can delete the SECRET and MADE_WITH lines, but there’s no harm in leaving them there.

It is important, for this tutorial, to leave the APP_ID as myAppId as the “test” page hard-codes that and expects that value.

If you’d like to keep this project, create an account on Glitch. Projects created as an anonymous user expire after five days. You can read more about the technical restrictions on free Glitch projects here.

Step 3: Testing

Once you’re finished making your changes to your 🗝️.env file, Glitch will automatically build and deploy your application. If you use the Logs feature within Glitch (click on Tools → Logs), you should see this when your app is deployed:

parse-server-example running on port 3000.

You should then be able to use the “Show” button to launch the application in the browser and get to a page that urges you to star the parse-server GitHub repository. To access the test harness page, add a trailing /test to your URL. This should take you to a page that will allow you to exercise a few parts of the Parse Server JavaScript SDK and create a dummy collection and record in your MongoDB. If you’re able to complete steps one through three on this test page, Parse Server is up and running. Optionally, you can go back to mLab and take a look at the data that was stored by the test harness to get a feel for how Parse Server stores data in MongoDB.

Deploying on Back4App

Back4App provides an easy way to deploy and host your Parse Server Apps.

Here are the steps:

  1. Create a free Back4App Account.
  2. Create a new Parse App.
  3. Go to your App → Core Settings menu and check your App Keys and Database URI.

If you need to migrate your local Parse Server to Back4App you can follow these guidelines.


Push Notifications

Parse Server provides basic push notification functionality for iOS, macOS, tvOS and Android. With this feature, you can target installations by channels or by advanced querying.

However, there are a few caveats:

  • It does not support very high throughput, since it does not employ a job queue system
  • Client push is not supported. You can only use masterKey to send push notifications
  • Delivery reports are not supported
  • Scheduled push is not supported


We support most of the sending options. Check the detailed doc here. Parse Server supports the following:

  • channels to target installations by channels
  • where to target installations by ParseQuery
  • priority under data for iOS push priority
  • push_type under data for iOS push type
  • alert under data for notification message
  • badge under data for iOS badge number
  • sound under data for iOS sound
  • content-available under data for iOS background job
  • category under data for iOS category
  • title under data for Android notification title
  • uri under data for Android notification launched URI
  • custom data under data for iOS and Android
  • Increment badge under data for iOS and Android badge number

Here is the list of sending options we do not support yet:

  • push_time for scheduled push
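Putting the supported options together, a payload might be assembled like this. The helper and its values are illustrative, not a Parse Server API; `channels`, `where` and the `data` keys are the options listed above:

```javascript
// Assemble a push payload from the supported sending options.
function buildPushPayload({ channels, where, data }) {
  const payload = {};
  if (channels) payload.channels = channels; // target installations by channel
  if (where) payload.where = where;          // or target installations by query
  payload.data = data;                       // alert, badge, sound, custom keys...
  return payload;
}

const payload = buildPushPayload({
  channels: ['news'],
  data: { alert: 'Hello from Parse Server', badge: 'Increment', sound: 'default' },
});
console.log(JSON.stringify(payload));
```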

Push Notifications Quick Start

1. Prepare APNS and FCM Credentials

You will need to obtain some credentials from FCM and APNS in order to send push notifications.


APNS (iOS, tvOS, macOS)

If you are setting up push notifications on iOS, tvOS or macOS for the first time, we recommend working through a Push Notifications tutorial to help you obtain a production Apple Push Certificate. Parse Server supports the PFX (.p12) file exported from Keychain Access. Parse Server also supports the push certificate and key in .pem format. Token-based authentication instead of a certificate is supported as well.

FCM (Android)

To get your FCM API key, go to the Firebase console and navigate to the project. Navigate to the settings of the project, and within the “Cloud Messaging” tab you will find it, labeled “Server key”.

2. Configure Parse Server

When initializing Parse Server, you should pass an additional push configuration. For example

var server = new ParseServer({
  databaseURI: '...',
  cloud: '...',
  appId: '...',
  masterKey: '...',
  push: {
    android: {
      apiKey: '...'
    },
    ios: {
      pfx: '/file/path/to/XXX.p12',
      passphrase: '', // optional password to your p12/PFX
      bundleId: '',
      production: false
    }
  }
});

The configuration format is

push: {
  android: {
    apiKey: '' // The Server API Key of FCM
  },
  ios: {
    pfx: '', // The filename of private key and certificate in PFX or PKCS12 format from disk
    passphrase: '', // optional password to your p12
    cert: '', // If not using the .p12 format, the path to the certificate PEM to load from disk
    key: '', // If not using the .p12 format, the path to the private key PEM to load from disk
    bundleId: '', // The bundle identifier associated with your app
    production: false // Specifies which APNS environment to connect to: Production (if true) or Sandbox
  }
}

For iOS, if you would like to use token-based authentication instead of certificates, you should use the following configuration format

push: {
  ios: {
    token: {
      key: '/file/path/to/AuthKey_XXXXXXXXXX.p8',
      keyId: "XXXXXXXXXX",
      teamId: "YYYYYYYYYY" // The Team ID for your developer account
    },
    topic: 'com.domain.appname', // The bundle identifier associated with your app
    production: false
  }
}

If you would like to support both the dev and prod certificates, you can provide an array of configurations like

push: {
  ios: [
    {
      pfx: '', // Dev PFX or P12
      bundleId: '',
      production: false // Dev
    },
    {
      pfx: '', // Prod PFX or P12
      bundleId: '',
      production: true // Prod
    }
  ],
  tvos: [
    // ...
  ],
  osx: [
    // ...
  ]
}

The configuration for macOS and tvOS works exactly as for iOS. Just add an additional configuration for each platform under the appropriate key. Please note the key for macOS is osx and for tvOS is tvos. If you need to support both the dev and prod certificates, you can do that for all Apple platforms as described above.

var server = new ParseServer({
  databaseURI: '...',
  cloud: '...',
  appId: '...',
  masterKey: '...',
  push: {
    android: {
      apiKey: '...'
    },
    ios: {
      pfx: '/file/path/to/XXX.p12',
      passphrase: '', // optional password to your p12/PFX
      bundleId: '',
      production: false
    },
    osx: {
      pfx: '/file/path/to/XXX.p12',
      passphrase: '', // optional password to your p12/PFX
      bundleId: '',
      production: false
    },
    tvos: {
      pfx: '/file/path/to/XXX.p12',
      passphrase: '', // optional password to your p12/PFX
      bundleId: '',
      production: false
    }
  }
});

If you have a list of certificates, Parse Server chooses among them by first trying to match an installation’s appIdentifier with the certificate’s bundleId. If it finds valid certificates, it uses those certificates to establish the connection to APNS and send notifications. If it cannot find any, it tries to send the notifications with all certificates: prod certificates first, then dev certificates.
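That selection strategy can be sketched as follows; a simplified illustration, not the actual implementation:

```javascript
// Pick the certificates whose bundleId matches the installation's appIdentifier;
// if none match, fall back to all certificates, production ones first.
function chooseCertificates(certs, appIdentifier) {
  const matching = certs.filter((c) => c.bundleId === appIdentifier);
  if (matching.length > 0) return matching;
  return [...certs].sort((a, b) => Number(b.production) - Number(a.production));
}

const certs = [
  { bundleId: 'com.example.app', production: false },
  { bundleId: 'com.example.app', production: true },
  { bundleId: 'com.other.app', production: true },
];
console.log(chooseCertificates(certs, 'com.example.app').length);        // 2
console.log(chooseCertificates(certs, 'com.unknown.app')[0].production); // true
```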

3. Configure Client Apps

Configure an app which connects to Parse Server. We have provided a detailed list of steps to configure your iOS and Android clients.

4. Send Push Notifications

Currently Parse Server only supports sending push notifications by your masterKey. The easiest way to do that is with curl:

curl -X POST \
  -H "X-Parse-Application-Id: you_app_id" \
  -H "X-Parse-Master-Key: your_master_key" \
  -H "Content-Type: application/json" \
  -d '{
        "where": {
          "deviceType": {
            "$in": [
              "ios",
              "android"
            ]
          }
        },
        "data": {
          "title": "The Shining",
          "alert": "All work and no play makes Jack a dull boy."
        }
      }' \
  http://your_server_address/parse/push

Push notifications can also be sent from cloud code:

// With promises
Parse.Push.send({
  where: { ... },
  data: { ... }
}, { useMasterKey: true })
.then(function() {
  // Push sent!
}, function(error) {
  // There was a problem :(
});

// With Legacy Backbone callbacks
Parse.Push.send({
  where: query,
  data: {
    alert: 'Test',
    badge: 1,
    sound: 'default'
  }
}, {
  useMasterKey: true,
  success: function() {
    // Push sent!
  },
  error: function(error) {
    // There was a problem :(
  }
});

After sending this to your Parse Server, you should see the push notifications show up on your devices.

Note: The iOS simulator cannot receive push notifications. You must run iOS apps on an iOS device.

In your Parse Server logs, you can see something similar to

// FCM request and response
{"request":{"params":{"priority":"normal","data":{"time":"2016-02-10T03:21:59.065Z","push_id":"NTDgWw7kp8","data":"{\"alert\":\"All work and no play makes Jack a dull boy.\"}"}}},"response":{"multicast_id":5318039027588186000,"success":1,"failure":0,"canonical_ids":0,"results":[{"registration_id":"APA91bEdLpZnXT76vpkvkD7uWXEAgfrZgkiH_ybkzXqhaNcRw1KHOY0s9GUKNgneGxe2PqJ5Swk1-Vf852kpHAP0Mhoj5wd1MVXpRsRr_3KTQo_dkNd_5wcQ__yWnWLxbeM3kg_JziJK","message_id":"0:1455074519347821%df0f8ea7f9fd7ecd"}]}}
APNS Connected
APNS Notification transmitted to:7a7d2864598e1f65e6e02135245b7daf8ea510514e6376f072dc29d53facaa41

These logs mean that the FCM and APNS connections are working.

Push Adapter

Parse Server provides a PushAdapter which abstracts the way we actually send push notifications. The default implementation is ParsePushAdapter, which uses FCM for Android push and APNS for iOS push. However, if you want to use other push providers, you can implement your own PushAdapter. Your adapter needs to implement send(data, installations), which is used for sending data to the installations. You can use ParsePushAdapter as a reference. After you implement your PushAdapter, you can pass that instance to Parse Server like this:

var server = new ParseServer({
  databaseURI: '...',
  cloud: '...',
  appId: '...',
  masterKey: '...',
  push: {
    adapter: your_adapter
  }
});

By doing this, after Parse Server decodes the push API request and runs the installation query, your PushAdapter’s send(data, installations) will be called and is responsible for sending the notifications. If you provide your custom PushAdapter, the default ParsePushAdapter will be ignored.
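A minimal custom adapter might look like the following sketch. Only send(data, installations) is shown, and the result shape returned here is an assumption for illustration:

```javascript
// Sketch of a custom push adapter. The only contract Parse Server relies on
// here is send(data, installations); a real adapter would hand the payload
// to your push provider inside send().
class LoggingPushAdapter {
  send(data, installations) {
    return installations.map((installation) => ({
      device: installation,   // the installation the payload was addressed to
      transmitted: true,      // a real adapter would report the provider's result
    }));
  }
}

const adapter = new LoggingPushAdapter();
const results = adapter.send(
  { data: { alert: 'hi' } },
  [{ deviceType: 'ios' }, { deviceType: 'android' }]
);
console.log(results.length); // 2
```

The instance would then be passed as `push: { adapter: adapter }` when constructing ParseServer, as shown above.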

Future Improvements

The current solution provides a good starting point for push notifications. We have a lot of ideas to improve the feature:

  • Support more platforms
  • Support more sending options
  • Support more push providers
  • Support scheduled pushes
  • Support delivery report and error handling
  • Support job queue and benchmarking

If you’re interested in any of these features, don’t hesitate to jump in and send a PR to the repo. We would love to work with you!


Silent Notifications

If you are seeing situations where silent notifications are failing to deliver, please ensure that your payload is setting the content-available attribute to Int(1) (or simply 1 in JavaScript) and not “1”. This value will be explicitly checked.

When sending a push notification to APNs you also have to set push_type to background for delivering silent notifications to devices running iOS 13 and later, or watchOS 6 or later.
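Those two requirements can be checked with a small helper; an illustrative sketch, not something Parse Server provides:

```javascript
// A silent notification needs content-available set to the number 1
// (not the string "1"), and push_type set to 'background' for iOS 13+.
function validSilentPush(data) {
  return data['content-available'] === 1 && data.push_type === 'background';
}

console.log(validSilentPush({ 'content-available': 1, push_type: 'background' }));   // true
console.log(validSilentPush({ 'content-available': '1', push_type: 'background' })); // false
```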



Configuring your clients to receive Push Notifications

The following will guide you through the necessary steps to configure your iOS and Android client apps to receive push notifications from Parse Server. If you haven’t yet, you will first need to prepare your APNS and FCM credentials as documented in Step 1 of the Push Notifications Quick Start.

iOS Apps

Register Device for Push Notifications

Open up your AppDelegate.swift, AppDelegate.m, or AppDelegate.cs file and make your app register for remote notifications by adding the following in your application:didFinishLaunchingWithOptions: function:

// Swift
let types: UIUserNotificationType = [.Alert, .Badge, .Sound]
let settings = UIUserNotificationSettings(forTypes: types, categories: nil)
application.registerUserNotificationSettings(settings)
application.registerForRemoteNotifications()

// Objective-C
UIUserNotificationType userNotificationTypes = (UIUserNotificationTypeAlert |
                                                UIUserNotificationTypeBadge |
                                                UIUserNotificationTypeSound);
UIUserNotificationSettings *settings = [UIUserNotificationSettings settingsForTypes:userNotificationTypes
                                                                         categories:nil];
[application registerUserNotificationSettings:settings];
[application registerForRemoteNotifications];

// Xamarin
UIUserNotificationType notificationTypes = (UIUserNotificationType.Alert |
                                            UIUserNotificationType.Badge |
                                            UIUserNotificationType.Sound);
var settings = UIUserNotificationSettings.GetSettingsForTypes(notificationTypes,
                                                              new NSSet(new string[] { }));
UIApplication.SharedApplication.RegisterUserNotificationSettings(settings);
UIApplication.SharedApplication.RegisterForRemoteNotifications();

// Handle Push Notifications
ParsePush.ParsePushNotificationReceived += (object sender, ParsePushNotificationEventArgs args) => {
  // Process Push Notification payload here.
};
Store the device token and handle the UI for notifications by adding the following to your main app delegate:

// Swift
func application(application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: NSData) {
    let installation = PFInstallation.currentInstallation()
    installation.setDeviceTokenFromData(deviceToken)
    installation.saveInBackground()
}

func application(application: UIApplication, didFailToRegisterForRemoteNotificationsWithError error: NSError) {
    if error.code == 3010 {
        print("Push notifications are not supported in the iOS Simulator.")
    } else {
        print("application:didFailToRegisterForRemoteNotificationsWithError: %@", error)
    }
}

func application(application: UIApplication, didReceiveRemoteNotification userInfo: [NSObject : AnyObject]) {
    PFPush.handlePush(userInfo)
}

// Objective-C
- (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken {
  // Store the deviceToken in the current installation and save it to Parse.
  PFInstallation *currentInstallation = [PFInstallation currentInstallation];
  [currentInstallation setDeviceTokenFromData:deviceToken];
  [currentInstallation saveInBackground];
}

- (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo {
  [PFPush handlePush:userInfo];
}

// Xamarin
public override void DidRegisterUserNotificationSettings(UIApplication application,
    UIUserNotificationSettings notificationSettings) {
  application.RegisterForRemoteNotifications();
}

public override void RegisteredForRemoteNotifications(UIApplication application,
    NSData deviceToken) {
  ParseInstallation installation = ParseInstallation.CurrentInstallation;
  installation.SetDeviceTokenFromData(deviceToken);
  installation.SaveInBackgroundAsync();
}

public override void ReceivedRemoteNotification(UIApplication application,
    NSDictionary userInfo) {
  // We need this to fire userInfo into ParsePushNotificationReceived.
  ParsePush.HandlePush(userInfo);
}

Compile and run!

If you configured your app correctly, installation objects will automatically be saved to Parse Server when you run your app. You can run this curl command to verify:

curl -X GET \
  -H "X-Parse-Application-Id: YOUR_APP_ID" \
  -H "X-Parse-Master-Key: YOUR_MASTER_KEY" \
  http://your_server_address/parse/installations

Proceed to Step 4.

Android apps

FCM Push Setup

Add this in your root build.gradle file (not your module build.gradle file):

allprojects {
	repositories {
		maven { url "" }

Then, add the library to your project build.gradle

dependencies {
    implementation ""

using the latest released version of the Parse Android SDK.

Then, follow Google’s docs for setting up a Firebase app. Although the steps are different for setting up FCM with Parse, it is also a good idea to read over the Firebase FCM setup. You will need to do the following:

  • Add app to Firebase console.
  • Add the Gradle plugin (see setup guide)
  • Download and add google-services.json to your app/ dir.
  • Remove GcmBroadcastReceiver, PushService, com.parse.push.gcm_sender_id if upgrading from GCM.
  • Add ParseFirebaseInstanceIdService and ParseFirebaseMessagingService to your AndroidManifest.xml file (as shown below):

You will need to register some services in your manifest, specifically:

        <action android:name="" />

Additionally, you will register:

        <action android:name=""/>

After these services are registered in the Manifest, you then need to register the push broadcast receiver:

        <action android:name="com.parse.push.intent.RECEIVE" />
        <action android:name="com.parse.push.intent.DELETE" />
        <action android:name="com.parse.push.intent.OPEN" />

Custom Notifications

If you need to customize the notification that is sent out from a push, you can do so by extending ParsePushBroadcastReceiver with your own class and registering it instead in the Manifest.

Register Device for Push Notifications

Create an Installation object by adding the following to the onCreate method of your Application class:

// Native:
public void onCreate() {
  // ...
  ParseInstallation.getCurrentInstallation().saveInBackground();
}
// Xamarin: Application.cs

// IMPORTANT: Change "parsexamarinpushsample" to match your namespace.
[Application(Name = "parsexamarinpushsample.ParseApplication")]
class ParseApplication : Application {
  // ...

  public override void OnCreate() {

    // ...

    ParsePush.ParsePushNotificationReceived += ParsePush.DefaultParsePushNotificationReceivedHandler;
  }
}

Compile and run!

If you configured your app correctly, installation objects will automatically be saved to Parse Server when you run your app. You can run this curl command to verify:

curl -X GET \
  -H "X-Parse-Application-Id: YOUR_APP_ID" \
  -H "X-Parse-Master-Key: YOUR_MASTER_KEY" \
  http://your_server_address/parse/installations

Proceed to Step 4.

Note that GCM push support is deprecated and FCM should be used instead, but instructions for GCM setup can be found here.


Class Level Permissions

Class level permissions are a security feature that allows one to restrict access in a broader way than ACL-based permissions.


If you want to restrict access to a full class to only authenticated users, you can use the requiresAuthentication class level permission. For example, if you want to allow your authenticated users to find and get objects from your application, and your admin users to have all privileges, you would set the following CLP:

// PUT http://localhost:1337/schemas/:className
// Set the X-Parse-Application-Id and X-Parse-Master-Key header
// body:
{
  "classLevelPermissions": {
    "find": {
      "requiresAuthentication": true,
      "role:admin": true
    },
    "get": {
      "requiresAuthentication": true,
      "role:admin": true
    },
    "create": { "role:admin": true },
    "update": { "role:admin": true },
    "delete": { "role:admin": true }
  }
}

Note that this is in no way securing your content. If you allow anyone to log in to your server, any client will be able to query this object.
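The body of that PUT can also be assembled programmatically. A sketch in which the helper name is made up, while the classLevelPermissions wrapper is the key the schemas endpoint expects:

```javascript
// Build a class-level-permissions body that restricts reads to authenticated
// users and gives the admin role full access.
function clpFor(readOps, writeOps) {
  const read = { requiresAuthentication: true, 'role:admin': true };
  const write = { 'role:admin': true };
  const permissions = {};
  readOps.forEach((op) => { permissions[op] = read; });
  writeOps.forEach((op) => { permissions[op] = write; });
  return { classLevelPermissions: permissions };
}

const body = clpFor(['find', 'get'], ['create', 'update', 'delete']);
console.log(JSON.stringify(body, null, 2));
```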


Configuring File Adapters

Parse Server allows developers to choose from several options when hosting files: GridStoreAdapter, which is backed by MongoDB; S3Adapter, which is backed by Amazon S3; or GCSAdapter, which is backed by Google Cloud Storage.

GridStoreAdapter is used by default and requires no setup, but if you’re interested in using S3 or Google Cloud Storage, additional configuration information is available below.

When using files on Parse, you will need to use the publicServerURL option in your Parse Server config. This is the URL that files will be accessed from, so it should be a URL that resolves to your Parse Server. Make sure to include your mount point in this URL.
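The resulting file URLs are built from publicServerURL plus the files route; a sketch assuming the conventional /files/<appId>/<filename> shape, with placeholder values:

```javascript
// Files are served relative to publicServerURL, which must include the
// mount point (here '/parse').
function fileUrl(publicServerURL, appId, filename) {
  return `${publicServerURL}/files/${appId}/${filename}`;
}

console.log(fileUrl('https://example.com/parse', 'myAppId', 'pic.jpg'));
// https://example.com/parse/files/myAppId/pic.jpg
```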

Configuring S3Adapter

If you’d like to use Amazon S3, follow these instructions to configure Parse Server to use S3Adapter.

Set up your bucket and permissions

First you will create a bucket in S3 to hold these files.

  1. Log into your AWS account or create a new one.
  2. Head to the S3 service and choose Create Bucket
  3. Fill out a unique Bucket Name and click Create. The bucket name should not contain any period ‘.’ for directAccess to work. All other defaults are OK.
  4. Now head to the Identity and Access Management (IAM) service.
  5. Click the Users tab, then Create New User.
  6. Fill out at least one user name and make sure Generate an access key for each user is selected before clicking Create.
  7. Make sure to Download Credentials on the next screen.
  8. Now select the Policies tab, then Create Policy.
  9. Select Create Your Own Policy, fill out a Policy Name.
  10. Copy the following config in Policy Document, changing BUCKET_NAME for the name of the bucket you created earlier. (note: this is a little more permissive than Parse Server needs, but it works for now)

        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": [
                        "s3:*"
                    ],
                    "Resource": [
                        "arn:aws:s3:::BUCKET_NAME",
                        "arn:aws:s3:::BUCKET_NAME/*"
                    ]
                }
            ]
        }
  11. Make sure to Validate Policy first, then click Create Policy.
  12. Go back to the Users tab and select the user you created earlier.
  13. In Permissions, select Attach Policy and find the policy we just created to attach it.

Configuration options

Writing to your Amazon S3 bucket from Parse Server is as simple as configuring and using the S3 files adapter.

Using environment variables

If you’re running a standalone Parse Server, you can use the following environment variables to configure the S3 adapter:

  • PARSE_SERVER_FILES_ADAPTER: Set this variable to './Files/S3Adapter.js'. Required.
  • S3_ACCESS_KEY: The AWS access key for a user that has the required permissions. Required.
  • S3_SECRET_KEY: The AWS secret key for the user. Required.
  • S3_BUCKET: The name of your S3 bucket. Needs to be globally unique in all of S3. Required.
  • S3_REGION: The AWS region to connect to. Optional. Default: 'us-east-1'.
  • S3_BUCKET_PREFIX: Create all the files with the specified prefix added to the filename. Can be used to put all the files for an app in a folder with 'folder/'. Optional.
  • S3_DIRECT_ACCESS: Whether reads go directly to S3 or are proxied through your Parse Server. If set to true, files will be made publicly accessible, and reads will not be proxied. Optional. Default: false.

Passing as options

If you’re using Node.js/Express:

var S3Adapter = require('parse-server').S3Adapter;

var api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  appId: process.env.APP_ID || 'APPLICATION_ID',
  masterKey: process.env.MASTER_KEY || 'MASTER_KEY',
  filesAdapter: new S3Adapter(
    "S3_ACCESS_KEY",
    "S3_SECRET_KEY",
    "S3_BUCKET",
    {directAccess: true}
  )
});

Don’t forget to change S3_ACCESS_KEY, S3_SECRET_KEY and S3_BUCKET to their correct values.

S3Adapter constructor options
new S3Adapter(accessKey, secretKey, bucket, options)
  • accessKey: The AWS access key for a user that has the required permissions. Required.
  • secretKey: The AWS secret key for the user. Required.
  • bucket: The name of your S3 bucket. Required.
  • options: JavaScript object (map) that can contain the following keys:
      • region: The AWS region to connect to. Optional. Default: us-east-1.
      • bucketPrefix: Create all the files with the specified prefix added to the filename. Can be used to put all the files for an app in a folder with 'folder/'. Optional. Default: null.
      • directAccess: Controls whether reads go directly to S3 or are proxied through your Parse Server. If set to true, files will be made publicly accessible, and reads will not be proxied. Optional. Default: false.
      • baseUrl: The base URL the file adapter uses to determine the file location for direct access. Optional. Default: null. To be used when directAccess=true. When set, the file adapter returns file URLs in the format baseUrl/bucketPrefix + filename.
      • baseUrlDirect: Set to true if the file adapter should ignore the bucket prefix when determining the file location for direct access. Optional. Default: false. To be used when directAccess=true and baseUrl is set. When true, the file adapter returns file URLs in the format baseUrl/filename.
      • globalCacheControl: The Cache-Control HTTP header to set in the file request. Optional. Default: null. Example: public, max-age=86400 for 24 hrs caching.

Configuring GCSAdapter

Unlike the S3 adapter, you must create a new Cloud Storage bucket, as this is not created automatically. See the Google Cloud guide on Authentication for more details.

To generate a private key in the Cloud Platform Console follow these instructions.


Starting with version 2.2.6, the GCS adapter is no longer provided by default by parse-server. To install it, run:

npm install --save parse-server-gcs-adapter

Configuration options

Writing to your Google Cloud Storage bucket from Parse Server is as simple as configuring and using the GCS files adapter.

Using environment variables

You can use Google Cloud Storage to host your static files by setting the following environment variables:

  • PARSE_SERVER_FILES_ADAPTER: Set this variable to 'parse-server-gcs-adapter'. Required.
  • GCP_PROJECT_ID: The project ID from the Google Developer’s Console. Required.
  • GCP_KEYFILE_PATH: Full path to a .json, .pem, or .p12 key downloaded from the Google Developers Console. Required.
  • GCS_BUCKET: The name of your GCS bucket. Required.
  • GCS_BUCKET_PREFIX: Create all the files with the specified prefix added to the filename. Can be used to put all the files for an app in a folder with 'folder/'. Optional.
  • GCS_DIRECT_ACCESS: Whether reads go directly to GCS or are proxied through your Parse Server. Optional. Default: false.

Passing as options

If you’re using Node.js/Express:

var GCSAdapter = require('parse-server-gcs-adapter');

var api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  appId: process.env.APP_ID || 'APPLICATION_ID',
  masterKey: process.env.MASTER_KEY || 'MASTER_KEY',
  filesAdapter: new GCSAdapter(
    "GCP_PROJECT_ID",
    "GCP_KEYFILE_PATH",
    "GCS_BUCKET",
    {directAccess: true}
  )
});

S3Adapter configuration for Digital Ocean Spaces

Spaces is an S3-equivalent object storage service provided by Digital Ocean. It uses the same API as S3, so you can use it with the S3 Adapter. You just need to change the AWS endpoint to point to your Spaces endpoint.

var S3Adapter = require('parse-server').S3Adapter;
var AWS = require("aws-sdk");

//Set Digital Ocean Spaces EndPoint
const spacesEndpoint = new AWS.Endpoint(process.env.SPACES_ENDPOINT);
//Define S3 options
var s3Options = {
  bucket: process.env.SPACES_BUCKET_NAME,
  baseUrl: process.env.SPACES_BASE_URL,
  region: process.env.SPACES_REGION,
  directAccess: true,
  globalCacheControl: "public, max-age=31536000",
  bucketPrefix: process.env.SPACES_BUCKET_PREFIX,
  s3overrides: {
    accessKeyId: process.env.SPACES_ACCESS_KEY,
    secretAccessKey: process.env.SPACES_SECRET_KEY,
    endpoint: spacesEndpoint
  }
};

var s3Adapter = new S3Adapter(s3Options);

var api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  appId: process.env.APP_ID || 'APPLICATION_ID',
  masterKey: process.env.MASTER_KEY || 'MASTER_KEY',
  filesAdapter: s3Adapter
});

GCSAdapter constructor options
new GCSAdapter(projectId, keyfilePath, bucket, options)
  • projectId: The project ID from the Google Developer’s Console. Required.
  • keyfilePath: Full path to a .json, .pem, or .p12 key downloaded from the Google Developers Console. Required.
  • bucket: The name of your GCS bucket. Required.
  • options: JavaScript object (map) that can contain the following keys:
      • bucketPrefix: Create all the files with the specified prefix added to the filename. Can be used to put all the files for an app in a folder with 'folder/'. Optional. Default: ''.
      • directAccess: Controls whether reads go directly to GCS or are proxied through your Parse Server. Optional. Default: false.

Configuring Cache Adapters

By default, parse-server provides an internal cache layer to speed up schema verifications and user, role, and session lookups.

In some cases, such as a distributed environment, you may want to use a distributed cache like Redis.

parse-server comes with an optional Redis cache adapter.

These cache adapters can be cleared internally at any time, so you should not use them to cache your own data; let parse-server manage their data lifecycle.


var RedisCacheAdapter = require('parse-server').RedisCacheAdapter;
var redisOptions = {url: 'YOUR REDIS URL HERE'}
var redisCache = new RedisCacheAdapter(redisOptions);

var api = new ParseServer({
  databaseURI: databaseUri || 'mongodb://localhost:27017/dev',
  appId: process.env.APP_ID || 'APPLICATION_ID',
  masterKey: process.env.MASTER_KEY || 'MASTER_KEY',
  cacheAdapter: redisCache
});

The redisOptions are passed directly to the redis.createClient method. For more information refer to the redis.createClient documentation.

Note that at the moment, only passing a single argument is supported.

The cache adapter can flush the Redis database at any time. It is best not to use the same Redis database for other services. A different Redis database can be chosen by providing a different database number in redisOptions. By default Redis has 16 databases (indexed from 0 to 15).
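For example, to keep the cache in Redis database 1 instead of the default 0, the database number can be appended to the URL path. This is a config sketch; the host and port are placeholders:

```javascript
var RedisCacheAdapter = require('parse-server').RedisCacheAdapter;

// redis://host:port/<db> selects a database number (0-15 with default Redis settings)
var redisCache = new RedisCacheAdapter({ url: 'redis://localhost:6379/1' });
```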


Live Queries

Parse.Query is one of the key concepts for Parse. It allows you to retrieve Parse.Objects by specifying some conditions, making it easy to build apps such as a dashboard, a todo list or even some strategy games. However, Parse.Query is based on a pull model, which is not suitable for apps that need real-time support.

Suppose you are building an app that allows multiple users to edit the same file at the same time. Parse.Query would not be an ideal tool since you can not know when to query from the server to get the updates.

To solve this problem, we introduce Parse LiveQuery. This tool allows you to subscribe to a Parse.Query you are interested in. Once subscribed, the server will notify clients whenever a Parse.Object that matches the Parse.Query is created or updated, in real-time.

Parse LiveQuery contains two parts, the LiveQuery server and the LiveQuery clients. In order to use live queries, you need to set up both of them.

Server Setup

The LiveQuery server should work with a Parse Server. The easiest way to setup the LiveQuery server is to make it run with the Parse Server in the same process. When you initialize the Parse Server, you need to define which Parse.Object classes you want to enable LiveQuery like this:

let api = new ParseServer({
  // ...other Parse Server options...
  liveQuery: {
    classNames: ['Test', 'TestAgain']
  }
});

After that, you need to initialize a LiveQuery server like this:

// Initialize a LiveQuery server instance, app is the express app of your Parse Server
let httpServer = require('http').createServer(app);
httpServer.listen(port);
var parseLiveQueryServer = ParseServer.createLiveQueryServer(httpServer);

The WebSocket (ws) protocol URL of the LiveQuery server is the hostname and port that the httpServer is listening on. For example, if the httpServer is listening on localhost:8080, the LiveQuery server's URL is ws://localhost:8080/. Customizing the path of the LiveQuery server's ws URL is not yet supported; the path is currently fixed.
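As a sketch, a JavaScript client would point the SDK at that URL before subscribing (the host and port below assume the localhost:8080 example above):

```javascript
// Tell the Parse JS SDK where the LiveQuery server is listening
Parse.liveQueryServerURL = 'ws://localhost:8080/';
```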

Client Setup

We provide JavaScript, Android and iOS LiveQuery clients for now. Let's use the JavaScript client as an example. In order to use LiveQuery, you need to initialize a Parse.Query object and subscribe to it.

let query = new Parse.Query('People');
query.equalTo('name', 'Mengyan');
let subscription = await query.subscribe();

After you get the subscription, you can use it to receive the updates of the related Parse.Object. For example, if someone creates a People object whose name field is Mengyan, then you can get the People object like this:

subscription.on('create', (people) => {
  console.log(people.get('name')); // This should output Mengyan
});

After that, if someone updates this People object like changing its score to 100, then you can get the People object like this:

subscription.on('update', (people) => {
  console.log(people.get('score')); // This should output 100
});

If you are done with the LiveQuery, you can simply unsubscribe from the subscription to stop receiving events:

subscription.unsubscribe();

We support five types of events:

  • create
  • enter
  • update
  • leave
  • delete

Further Reading

You can check the LiveQuery protocol specification to learn more about each event type.

For more details about the JavaScript LiveQuery Client SDK, check out the open source code and the Live Query section in the JavaScript Guide.

For the iOS LiveQuery Client SDK, check out the open source code.

LiveQuery Protocol

The LiveQuery Protocol is the key to the Parse LiveQuery. The clients and server communicate through WebSocket using this protocol. Clients can follow the protocol to connect to the LiveQuery server, subscribe/unsubscribe a Parse.Query and get updates from the LiveQuery server.

The LiveQuery protocol is a simple protocol that encapsulates messages in JSON strings and runs over a WebSocket connection. For the specification, check out the Parse Server wiki page.

LiveQuery Server

Configuring the server

The full configuration of the LiveQuery server should look like this:

{
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  keyPairs: {
    "restAPIKey": "",
    "javascriptKey": "",
    "clientKey": "",
    "windowsKey": "",
    "masterKey": ""
  },
  serverURL: 'serverURL',
  websocketTimeout: 10 * 1000,
  cacheTimeout: 60 * 600 * 1000,
  logLevel: 'VERBOSE'
}


  • appId - Required. This string should match the appId in use by your Parse Server. If you deploy the LiveQuery server alongside Parse Server, the LiveQuery server will try to use the same appId.
  • masterKey - Required. This string should match the masterKey in use by your Parse Server. If you deploy the LiveQuery server alongside Parse Server, the LiveQuery server will try to use the same masterKey.
  • serverURL - Required. This string should match the serverURL in use by your Parse Server. If you deploy the LiveQuery server alongside Parse Server, the LiveQuery server will try to use the same serverURL.
  • keyPairs - Optional. A JSON object that serves as a whitelist of keys. It is used for validating clients when they try to connect to the LiveQuery server. Check the following Security section and our protocol specification for details.
  • websocketTimeout - Optional. Number of milliseconds between ping/pong frames. The WebSocket server sends ping/pong frames to the clients to keep the WebSocket alive. This value defines the interval of the ping/pong frame from the server to clients. Defaults to 10 * 1000 ms (10 s).
  • cacheTimeout - Optional. Number in milliseconds. When clients provide the sessionToken to the LiveQuery server, the LiveQuery server will try to fetch its ParseUser’s objectId from parse server and store it in the cache. The value defines the duration of the cache. Check the following Security section and our protocol specification for details. Defaults to 30 * 24 * 60 * 60 * 1000 ms (~30 days).
  • logLevel - Optional. This string defines the log level of the LiveQuery server. We support VERBOSE, INFO, ERROR, NONE. Defaults to INFO.

Basic Architecture

The LiveQuery server is a separate server from Parse Server. As shown in the picture, it mainly contains four components at the runtime.

  • The Publisher. It is responsible for publishing the update of a Parse.Object. When a Parse.Object changes, it will publish a message to the subscribers. The message contains the original Parse.Object and the new Parse.Object. The Publisher is inside the Parse Server at the runtime.
  • The Subscriber. It is responsible for receiving the messages which are sent from the Publisher. After it gets the messages, it can pass them to the LiveQuery component for processing.
  • The WebSocketServer. It is responsible for maintaining the WebSocket connections with clients. It can pass the subscribe/unsubscribe messages from clients to the LiveQuery component. When the LiveQuery component finds a Parse.Object fulfills a Parse.Query, it will get the event message from LiveQuery component and send it to the clients.
  • The LiveQuery. It is the key component of the LiveQuery Server. It maintains the subscription status of clients. After it gets the Parse.Object updates from the Subscriber, it can do the query matching and generate the event messages for clients.


Based on your usage, different components of the LiveQuery server may become the bottleneck. If your app has high throughput, the Publisher/Subscriber may have problems. If you subscribe to many complex Parse.Query objects, the LiveQuery component may cause issues. If you need to maintain lots of client connections, the WebSocketServer may be the bottleneck. Thus, we highly recommend load testing your app if you want to use the LiveQuery server in production.

In general, our suggestion for making the LiveQuery server scalable is to separate the Parse Server from the LiveQuery server and add more LiveQuery server instances based on your needs. To help you do this, we use Redis to implement a Publisher and Subscriber. If you want to use them, the only thing you need to do is provide the Redis server address when you initialize the Parse Server and LiveQuery server, like this:

let api = new ParseServer({
  // ...other Parse Server options...
  liveQuery: {
    classNames: ['Test', 'TestAgain'],
    redisURL: 'redis://localhost:6379'
  }
});


let httpServer = require('http').createServer(app);
httpServer.listen(port);
var parseLiveQueryServer = ParseServer.createLiveQueryServer(httpServer, {
  redisURL: 'redis://localhost:6379'
});

This redis database should be different from the redis database used for RedisCacheAdapter.

The architecture of the whole LiveQuery system after you use Redis should be like this:

For example, if you use Heroku to deploy your Live Query server, after you setup the Redis with the LiveQuery server, you can simply add more dynos to make your app more scalable like this:

Security with LiveQuery

The LiveQuery server provides two ways to secure your app. The first one is key matching. If you provide key pairs when you initialize the LiveQuery server, clients have to provide the necessary key pairs when they try to connect to the LiveQuery server. Otherwise, the connection will be refused.
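For example, if keyPairs contains a javascriptKey, a JavaScript client must initialize the SDK with a matching key before connecting. This is a sketch; the app id, key, and server URL below are placeholders:

```javascript
// The second argument must match the javascriptKey configured in the server's keyPairs
Parse.initialize('myAppId', 'myJavascriptKey');
Parse.serverURL = 'http://localhost:1337/parse';
```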

The second one is ACLs. For an explanation of what an ACL is, you can check the definition here. When clients try to connect and subscribe to the LiveQuery server, they can provide their sessionToken. If you give your Parse.Object a proper ACL, then when the LiveQuery server gets updates to the Parse.Object, it will try to match the Parse.Object's ACL with the sessionToken of clients or their subscriptions. The event will only be sent to clients whose sessionToken matches the Parse.Object's ACL.

LiveQuery Clients

The JavaScript LiveQuery client is provided as part of the Parse JavaScript SDK as of version 1.8.0. A separate LiveQuery client library is available for iOS / OS X and Android.

LiveQuery With NGINX

Please refer to the NGINX documentation in order to allow proper handling of the LiveQuery server, which relies on WebSockets.


OAuth and 3rd Party Authentication

Parse Server supports 3rd party authentication with the following providers:

  • Apple
  • Facebook
  • Facebook AccountKit
  • Github
  • Google
  • Instagram
  • Janrain Capture
  • Janrain Engage
  • LDAP
  • LinkedIn
  • Meetup
  • Microsoft Graph
  • PhantAuth
  • QQ
  • Spotify
  • Twitter
  • vKontakte
  • WeChat
  • Weibo

Configuration of these 3rd-party modules is done with the auth option passed to Parse Server:

{
  auth: {
    twitter: {
      consumer_key: "", // REQUIRED
      consumer_secret: "" // REQUIRED
    },
    facebook: {
      appIds: "FACEBOOK APP ID"
    }
  }
}

Supported 3rd party authentications

Below, you will find all expected payloads for logging in with a 3rd party auth.

Note that most of them don't require any particular server configuration, so you can use them directly.

Facebook authData

{
  "facebook": {
    "id": "user's Facebook id number as a string",
    "access_token": "an authorized Facebook access token for the user",
    "expiration_date": "token expiration date of the format: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
  }
}

Learn more about Facebook login.

Facebook AccountKit authData

{
  "facebookaccountkit": {
    "id": "user's Facebook Account Kit id number as a string",
    "access_token": "an authorized Facebook Account Kit access token for the user",
    // optional, access token via authorization code does not seem to have this in response
    "last_refresh": "time stamp at which token was last refreshed"
  }
}

The options passed to Parse server:

{
  auth: {
    facebookaccountkit: {
      // your facebook app id
      appIds: ["id1", "id2"],
      // optional, if you have enabled the 'Require App Secret' setting in your app's dashboards
      appSecret: "App secret from Account Kit setting"
    }
  }
}

Learn more about Facebook Account Kit.

There are two ways to retrieve the access token.

Twitter authData

{
  "twitter": {
    "id": "user's Twitter id number as a string",
    "consumer_key": "your application's consumer key",
    "consumer_secret": "your application's consumer secret",
    "auth_token": "an authorized Twitter token for the user with your application",
    "auth_token_secret": "the secret associated with the auth_token"
  }
}

The options passed to Parse server:

{
  auth: {
    twitter: {
      consumer_key: "", // REQUIRED
      consumer_secret: "" // REQUIRED
    }
  }
}

Learn more about Twitter login.

Anonymous user authData

{
  "anonymous": {
    "id": "random UUID with lowercase hexadecimal digits"
  }
}

Apple authData

As of Parse Server 3.5.0 you can use Sign In With Apple.

{
  "apple": {
    "id": "user",
    "token": "the identity token for the user"
  }
}

Using Apple Sign In on an iOS device will give you an ASAuthorizationAppleIDCredential.user string for the user identifier, which can be matched against the sub claim of the JWT identity token. Using Apple Sign In through the Apple JS SDK or through the REST service will only give you the JWT identity token (id_token), which you'll have to decompose to obtain the user identifier from its sub claim. As an example you could use something like JSON.parse(atob(token.split(".")[1])).sub.
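The decomposition mentioned above can be sketched in Node as follows. The token below is fabricated for illustration only; a real Apple identity token should additionally be signature-verified against Apple's public keys:

```javascript
// Extract the `sub` (user identifier) claim from a JWT identity token.
function appleUserIdFromToken(idToken) {
  var payloadB64 = idToken.split('.')[1];
  // Node equivalent of atob(): decode the base64-encoded payload segment
  var payloadJson = Buffer.from(payloadB64, 'base64').toString('utf8');
  return JSON.parse(payloadJson).sub;
}

// Fabricated example token in header.payload.signature form
var payload = Buffer.from(JSON.stringify({ sub: '001234.abcdef' })).toString('base64');
var token = 'e30.' + payload + '.sig'; // 'e30' is base64 for '{}'

console.log(appleUserIdFromToken(token)); // 001234.abcdef
```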

Configuring parse-server for Sign In with Apple

{
  auth: {
    apple: {
      client_id: "" // optional (for extra validation), use the Service ID from Apple.
    }
  }
}

Learn more about Sign In With Apple.

Github authData

{
  "github": {
    "id": "user's Github id (string)",
    "access_token": "an authorized Github access token for the user"
  }
}

Google authData

Google OAuth supports validation of id_tokens and access_tokens.

{
  "google": {
    "id": "user's Google id (string)",
    "id_token": "an authorized Google id_token for the user (use when not using access_token)",
    "access_token": "an authorized Google access_token for the user (use when not using id_token)"
  }
}

Instagram authData

{
  "instagram": {
    "id": "user's Instagram id (string)",
    "access_token": "an authorized Instagram access token for the user",
    "apiURL": "an api url to make requests. Default:"
  }
}

Configuring Parse Server for LDAP

The LDAP module can check if a user can authenticate (bind) with the given credentials. Optionally, it can also check if the user is in a certain group. This check is done using a user specified query, called an LDAP Filter. The query should return all groups which the user is a member of. The cn attribute of the query results is compared to groupCn.

To build a query which works with your LDAP server, you can use a LDAP client like Apache Directory Studio.

{
  "ldap": {
    "url": "ldap://host:port",
    "suffix": "the root of your LDAP tree",
    "dn": "Bind dn.  is replaced with the id supplied in authData",
    "groupCn": "Optional. A group which the user must be a member of.",
    "groupFilter": "Optional. An LDAP filter for finding groups which the user is part of.  is replaced with the id supplied in authData."
  }
}

If either groupCn or groupFilter is not specified, the group check is not performed.

Example Configuration (this works with the public LDAP test server hosted by Forumsys):

{
  "ldap": {
    "url": "ldap://",
    "suffix": "dc=example,dc=com",
    "dn": "uid=, dc=example, dc=com",
    "groupCn": "Chemists",
    "groupFilter": "(&(uniqueMember=uid=,dc=example,dc=com)(objectClass=groupOfUniqueNames))"
  }
}


{
  "authData": {
    "ldap": {
      "id": "user id",
      "password": "password"
    }
  }
}

LinkedIn authData

{
  "linkedin": {
    "id": "user's LinkedIn id (string)",
    "access_token": "an authorized LinkedIn access token for the user",
    "is_mobile_sdk": true|false // set to true if you acquired the token through LinkedIn mobile SDK
  }
}

Meetup authData

{
  "meetup": {
    "id": "user's Meetup id (string)",
    "access_token": "an authorized Meetup access token for the user"
  }
}

Microsoft Graph authData

{
  "microsoft": {
    "id": "user's microsoft id (string)", // required
    "access_token": "an authorized microsoft graph access token for the user", // required
    "mail": "user's microsoft email (string)"
  }
}

Learn more about Microsoft Graph Auth Overview.

To get access on behalf of a user.

PhantAuth authData

As of Parse Server 3.7.0 you can use PhantAuth.

{
  "phantauth": {
    "id": "user's PhantAuth sub (string)",
    "access_token": "an authorized PhantAuth access token for the user"
  }
}

Learn more about PhantAuth.

QQ authData

{
  "qq": {
    "id": "user's QQ id (string)",
    "access_token": "an authorized QQ access token for the user"
  }
}

Spotify authData

{
  "spotify": {
    "id": "user's spotify id (string)",
    "access_token": "an authorized spotify access token for the user"
  }
}

vKontakte authData

{
  "vkontakte": {
    "id": "user's vkontakte id (string)",
    "access_token": "an authorized vkontakte access token for the user"
  }
}

Configuring parse-server for vKontakte

{
  auth: {
    vkontakte: {
      appSecret: "", // REQUIRED, your vkontakte application secret
      appIds: "" // REQUIRED, your vkontakte application id
    }
  }
}

WeChat authData

{
  "wechat": {
    "id": "user's wechat id (string)",
    "access_token": "an authorized wechat access token for the user"
  }
}

Weibo authData

{
  "weibo": {
    "id": "user's weibo id (string)",
    "access_token": "an authorized weibo access token for the user"
  }
}

Custom authentication

It is possible to leverage the OAuth support with any 3rd party authentication that you bring in.


{
  auth: {
    my_custom_auth: {
      module: "PATH_TO_MODULE", // or a module object
      option1: "",
      option2: ""
    }
  }
}

In this module, you need to implement and export two functions: validateAuthData(authData, options) and validateAppId(appIds, authData, options).
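A minimal sketch of such a module follows. The field checks here are hypothetical placeholders; a real module would call the 3rd party provider to verify the token:

```javascript
// Hypothetical custom auth module for the my_custom_auth provider above.
// Parse Server calls these functions and expects a promise:
// resolve means the data is valid, reject means authentication fails.
function validateAuthData(authData, options) {
  // A real implementation would verify authData.access_token with the provider here.
  if (authData.id && authData.access_token) {
    return Promise.resolve();
  }
  return Promise.reject(new Error('Invalid auth data for my_custom_auth'));
}

function validateAppId(appIds, authData, options) {
  // Accept any app id in this sketch; a real module would check against appIds.
  return Promise.resolve();
}

module.exports = { validateAuthData: validateAuthData, validateAppId: validateAppId };
```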

For more information about custom auth, please see the examples.


Compatibility with Hosted Parse

There are a few areas where Parse Server does not provide compatibility with the original hosted backend.


Analytics

Parse Analytics is not supported. We recommend sending analytics to another similar service like Mixpanel or Google Analytics.


Client Keys

By default, only an application ID is needed to authenticate with Parse Server. The base configuration that comes with the one-click deploy options does not require authenticating with any other types of keys. Therefore, specifying client keys on Android or iOS is not needed.

Client Class Creation

Hosted Parse applications can turn off client class creation in their settings. Client Class Creation can be disabled by a configuration flag on parse-server.

Cloud Code

You will likely need to make several changes to your Cloud Code to port it to Parse Server.

No current user

Each Cloud Code request is now handled by the same instance of Parse Server; therefore, there is no longer a concept of a “current user” constrained to each Cloud Code request. If your code uses Parse.User.current(), you should use request.user instead. If your Cloud function relies on queries and other operations being performed within the scope of the user making the Cloud Code request, you will need to pass the user’s sessionToken as a parameter to the operation in question.

Consider a messaging app where every Message object is set up with an ACL that only provides read access to a limited set of users, say the author of the message and the recipient. To get all the messages sent to the current user you may have a Cloud function similar to this one:

// Cloud Code
Parse.Cloud.define('getMessagesForUser', function(request, response) {
  var user = Parse.User.current();

  var query = new Parse.Query('Messages');
  query.equalTo('recipient', user);
  query.find()
    .then(function(messages) {
      response.success(messages);
    });
});

If this function is ported over to Parse Server without any modifications, you will first notice that your function is failing to run because Parse.User.current() is not recognized. If you replace Parse.User.current() with request.user, the function will run successfully but you may still find that it is not returning any messages at all. That is because query.find() is no longer running within the scope of request.user and therefore it will only return publicly-readable objects.

To make queries and writes as a specific user within Cloud Code, you need to pass the user’s sessionToken as an option. The session token for the authenticated user making the request is available in request.user.getSessionToken().

The ported Cloud function would now look like this:

// Parse Server Cloud Code
Parse.Cloud.define('getMessagesForUser', function(request, response) {
  var user = request.user; // request.user replaces Parse.User.current()
  var token = user.getSessionToken(); // get session token from request.user

  var query = new Parse.Query('Messages');
  query.equalTo('recipient', user);
  query.find({ sessionToken: token }) // pass the session token to find()
    .then(function(messages) {
      response.success(messages);
    });
});

Master key must be passed explicitly

Parse.Cloud.useMasterKey() is not available in Parse Server Cloud Code. Instead, pass useMasterKey: true as an option to any operation that requires the use of the master key to bypass ACLs and/or CLPs.

Consider you want to write a Cloud function that returns the total count of messages sent by all of your users. Since the objects in our Message class are using ACLs to restrict read access, you will need to use the master key to get the total count:

Parse.Cloud.define('getTotalMessageCount', function(request, response) {

  // Parse.Cloud.useMasterKey() <-- no longer available!

  var query = new Parse.Query('Messages');
  query.count({ useMasterKey: true }) // count() will use the master key to bypass ACLs
    .then(function(count) {
      response.success(count);
    });
});

Minimum JavaScript SDK version

Parse Server also uses at least version 1.7.0 of the Parse SDK, which has some breaking changes from the previous versions. If your Cloud Code uses a previous version of the SDK, you may need to update your cloud code. You can look up which version of the JavaScript SDK your Cloud Code is using by running the following command inside your Cloud Code folder:

$ parse jssdk
Current JavaScript SDK version is 1.7.0

Network requests

As with Parse Cloud Code, you can use Parse.Cloud.httpRequest to make network requests on Parse Server. It’s worth noting that in Parse Server you can use any npm module, therefore you may also install the “request” module and use that directly instead.

Cloud Modules

Native Cloud Code modules are not available in Parse Server, so you will need to use a replacement:

  • App Links: Use the applinks-metatag module.

  • Buffer: This is included natively with Node. Remove any require('buffer') calls.

  • Mailgun: Use the official npm module: mailgun-js.

  • Mandrill: Use the official npm module, mandrill-api.

  • Moment: Use the official npm module, moment.

  • Parse Image: We recommend using another image manipulation library, like the imagemagick wrapper module. Alternatively, consider using a cloud-based image manipulation and management platform, such as Cloudinary.

  • SendGrid: Use the official npm module, sendgrid.

  • Stripe: Use the official npm module, stripe.

  • Twilio: Use the official npm module, twilio.

  • Underscore: Use the official npm module, underscore.


Parse Dashboard

Parse has provided a separate Parse Dashboard project which can be used to manage all of your Parse Server applications.

Parse Config

Parse Config is available in Parse Server and can be configured from your Parse Dashboard.

Push Notification Console

You can now send push notifications using Parse Dashboard.

Storing Files

Parse Files in hosted Parse applications were limited to 10 MB. The default storage layer in Parse Server, GridStore, can handle files up to 16 MB. To store larger files, we suggest using Amazon’s Simple Storage Service (S3).

In-App Purchases

iOS in-app purchase verification through Parse is not supported.


Scheduled Jobs

There is no background job functionality in Parse Server. If you have scheduled jobs, port them over to a self-hosted solution using one of the many open source job queue projects. A popular one is bull. Alternatively, if your jobs are simple, you could use a cron job.

Parse IoT Devices

Push notification support for the Parse IoT SDKs is provided through the Parse Push Notification Service (PPNS). PPNS is a push notification service for Android and IoT devices maintained by Parse. This service will be retired on January 28, 2017. This page documents the PPNS protocol for users that wish to create their own PPNS-compatible server for use with their Parse IoT devices.

Push Notifications Compatibility

Client Push

Hosted Parse applications could disable a security setting in order to allow clients to send push notifications. Parse Server does not allow clients to send push notifications as the masterKey must be used. Use Cloud Code or the REST API to send push notifications.


Schema

Schema validation is built in. Retrieving the schema via the API is available.

Session Features

Parse Server requires the use of revocable sessions.

Single app aware

Parse Server supports only a single app per instance. There is ongoing work to make Parse Server multi-app aware. However, if you intend to run many different apps with different datastores, you currently need to instantiate separate instances.

Social Login

Facebook, Twitter, and Anonymous logins are supported out of the box. Support for additional platforms may be configured via the oauth configuration option.


Webhooks

Cloud Code Webhooks are not supported.

Welcome Emails and Email Verification

Verifying user email addresses and enabling password reset via email requires an email adapter. As part of the parse-server package we provide an adapter for sending email through Mailgun. To use it, sign up for Mailgun, and add this to your initialization code:

var server = ParseServer({
  // ... other server options ...

  // Enable email verification
  verifyUserEmails: true,

  // if `verifyUserEmails` is `true` and
  //     if `emailVerifyTokenValidityDuration` is `undefined` then
  //        email verify token never expires
  //     else
  //        email verify token expires after `emailVerifyTokenValidityDuration`
  // `emailVerifyTokenValidityDuration` defaults to `undefined`
  // email verify token below expires in 2 hours (= 2 * 60 * 60 == 7200 seconds)
  emailVerifyTokenValidityDuration: 2 * 60 * 60, // in seconds (2 hours = 7200 seconds)

  // set preventLoginWithUnverifiedEmail to false to allow users to log in without verifying their email
  // set preventLoginWithUnverifiedEmail to true to prevent users from logging in if their email is not verified
  preventLoginWithUnverifiedEmail: false, // defaults to false

  // The public URL of your app.
  // This will appear in the link that is used to verify email addresses and reset passwords.
  // Set the mount path as it is in serverURL
  publicServerURL: '',

  // Your app's name. This will appear in the subject and body of the emails that are sent.
  appName: 'Parse App',

  // The email adapter
  emailAdapter: {
    module: '@parse/simple-mailgun-adapter',
    options: {
      // The address that your emails come from
      fromAddress: '[email protected]',
      // Your Mailgun domain
      domain: '',
      // Your Mailgun API key
      apiKey: 'key-mykey',
    }
  },

  // account lockout policy setting (OPTIONAL) - defaults to undefined
  // if the account lockout policy is set and there are more than `threshold` failed login attempts, the `login` API call returns error code `Parse.Error.OBJECT_NOT_FOUND` with error message `Your account is locked due to multiple failed login attempts. Please try again after <duration> minute(s)`. After `duration` minutes with no login attempts, the user is allowed to try logging in again.
  accountLockout: {
    duration: 5, // the number of minutes that a locked-out account remains locked before automatically unlocking. Set it to a value greater than 0 and less than 100000.
    threshold: 3, // the number of failed sign-in attempts that will cause a user account to be locked. Set it to an integer value greater than 0 and less than 1000.
  },

  // optional settings to enforce password policies
  passwordPolicy: {
    // Two optional settings to enforce strong passwords. Either one or both can be specified.
    // If both are specified, both checks must pass to accept the password
    // 1. a RegExp object or a regex string representing the pattern to enforce
    validatorPattern: /^(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.{8,})/, // enforce passwords of at least 8 characters, with at least 1 lower case, 1 upper case, and 1 digit
    // 2. a callback function to be invoked to validate the password
    validatorCallback: (password) => { return validatePassword(password) },
    doNotAllowUsername: true, // optional setting to disallow the username in passwords
    maxPasswordAge: 90, // optional setting in days for password expiry. Login fails if the user does not reset the password within this period after signup/last reset.
    maxPasswordHistory: 5, // optional setting to prevent reuse of the previous n passwords. The maximum value that can be specified is 20. Not specifying it, or specifying 0, will not enforce history.
    // optional setting to set a validity duration for password reset links (in seconds)
    resetTokenValidityDuration: 24*60*60, // expire after 24 hours
  }
});
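The password policy above can be sanity-checked on its own. A minimal sketch: the regex is copied from the validatorPattern example, and validatePassword is a hypothetical validatorCallback you would supply yourself (here it rejects passwords containing "parse"):

```javascript
// Regex copied from the validatorPattern example above:
// at least 8 characters, with at least one lower case, one upper case, and one digit
var validatorPattern = /^(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.{8,})/;

// A hypothetical validatorCallback: reject passwords containing "parse"
function validatePassword(password) {
  return password.toLowerCase().indexOf('parse') === -1;
}

console.log(validatorPattern.test('Passw0rd')); // true: 8 chars, lower, upper, digit
console.log(validatorPattern.test('password')); // false: no upper case or digit
console.log(validatePassword('MyParsePwd1'));   // false: contains "parse"
```

If both settings are configured, a candidate password must pass both checks to be accepted.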

You can also use other email adapters contributed by the community.


Using MongoDB + RocksDB

MongoRocks: What and Why?

Quick Version

Parse has been using MongoDB on RocksDB (MongoRocks) for application data since April 2015. If you are migrating your Parse app(s) to your own MongoDB infrastructure, we recommend using MongoRocks to take advantage of the increased performance, greater efficiency, and powerful backup capabilities offered by the RocksDB storage engine.

Long Version

In version 3.0, MongoDB introduced the storage engine API to allow users an alternative to the default memory mapped (MMAP) storage engine used by earlier versions of MongoDB. In 2015, Facebook developed a RocksDB implementation of the storage engine API, MongoRocks, which is used by Parse for all customer data. RocksDB is an embeddable persistent key-value store developed by Facebook. It uses a Log-structured Merge Tree (LSM) for storage and is designed for high write throughput and storage efficiency.

Improved Performance and Efficiency

When Parse switched from MMAP to MongoRocks, we discovered the following benefits in our benchmarking:

  • 50x increase in write performance
  • 90% reduction in storage size
  • significantly reduced latency on concurrent workloads due to reduced lock contention

Simple and efficient hot backups

In addition to performance gains, a major advantage of MongoRocks (and RocksDB in general) is very efficient backups that do not require downtime. As detailed in this blog post, RocksDB backups can be taken on a live DB without interrupting service. RocksDB also supports incremental backups, reducing the I/O, network, and storage costs of doing backups and allowing backups to run more frequently. At Parse, we reduced DB infrastructure costs by more than 20% by using MongoRocks, the Strata backup tool, and Amazon S3 in place of MMAP and EBS Snapshots.

Are there any reasons not to use MongoRocks?

Generally speaking, MongoRocks was suitable for running all app workloads at Parse. However, there are some workloads for which LSM trees are not ideal, and for which better performance may be achieved with other storage engines like MMAP or WiredTiger, such as:

  • Applications with high number of in-place updates or deletes. For example, a very busy work queue or heap.
  • Applications with queries that scan many documents and fit entirely in memory.

It’s difficult to make precise statements about performance for any given workload without data. When in doubt, run your own benchmarks. You can use the flashback toolset to record and replay benchmarks based on live traffic.

Example: Provisioning on Ubuntu and AWS

There are hundreds of ways to build out your infrastructure. For illustration we use an AWS and Ubuntu configuration similar to that used by Parse. You will need a set of AWS access keys and the AWS CLI.

Choosing Hardware

At Parse, we use AWS i2.* (i/o optimized) class instances with ephemeral storage for running MongoRocks. Prior to this, when we used the MMAP storage engine, we used r3.* (memory optimized) instances with EBS PIOPS storage. Why the change?

  • RocksDB is designed to take full advantage of SSD storage. We also experienced large bursts of I/O for some workloads, and provisioning enough IOPS with EBS to support this was expensive. The ephemeral SSDs provided by the i2 class were ideal in our case.
  • MongoRocks uses significantly more CPU than MMAP due to compression. CPU was never a major factor in MMAP.
  • Memory is less critical in MongoRocks. Memory is everything in MMAP.
  • EBS snapshots were critical to our backup strategy with MMAP. With MongoRocks, we had incremental backups with strata, so snapshots were not needed.

If you’re not sure about your workload requirements, we recommend running on the i2 class instances. You can always change this later depending on your production experience.

Below is a general guide for instance sizing based on your existing Parse request traffic:

  • < 100 requests/sec: i2.xlarge
  • 100-500 requests/sec: i2.2xlarge
  • 500+ requests/sec: i2.4xlarge

This guide will use i2.2xlarge as an example.
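The sizing guide above can be encoded as a tiny helper for illustration (the thresholds come straight from the list; suggestInstanceType is just an illustrative name, not a real tool, and the boundary at exactly 500 requests/sec is treated as i2.2xlarge here):

```javascript
// Toy encoding of the instance sizing guide above
function suggestInstanceType(requestsPerSec) {
  if (requestsPerSec < 100) return 'i2.xlarge';
  if (requestsPerSec <= 500) return 'i2.2xlarge'; // 100-500 requests/sec
  return 'i2.4xlarge';                            // 500+ requests/sec
}

console.log(suggestInstanceType(50));   // 'i2.xlarge'
console.log(suggestInstanceType(250));  // 'i2.2xlarge'
console.log(suggestInstanceType(1000)); // 'i2.4xlarge'
```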


We recommend you run MongoDB in replica set mode, with at least three nodes for availability. Each node should run in a separate Availability Zone.

There are dozens of ways to provision hosts in AWS. For reference, we use the AWS CLI below, but the inputs can be easily translated to your tool of choice.

$ SECURITY_GROUP=<my security group ID>
$ US_EAST_1A_SUBNET=<subnet id for us-east-1a>
$ US_EAST_1C_SUBNET=<subnet id for us-east-1c>
$ US_EAST_1D_SUBNET=<subnet id for us-east-1d>
$ aws ec2 run-instances --image-id ami-fce3c696 --instance-type i2.2xlarge --key-name chef3 --block-device-mappings '[{"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},{"DeviceName": "/dev/sdc", "VirtualName": "ephemeral1"}]' --security-group-ids ${SECURITY_GROUP} --subnet-id ${US_EAST_1A_SUBNET} --associate-public-ip-address
$ aws ec2 run-instances --image-id ami-fce3c696 --instance-type i2.2xlarge --key-name chef3 --block-device-mappings '[{"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},{"DeviceName": "/dev/sdc", "VirtualName": "ephemeral1"}]' --security-group-ids ${SECURITY_GROUP} --subnet-id ${US_EAST_1C_SUBNET} --associate-public-ip-address
$ aws ec2 run-instances --image-id ami-fce3c696 --instance-type i2.2xlarge --key-name chef3 --block-device-mappings '[{"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},{"DeviceName": "/dev/sdc", "VirtualName": "ephemeral1"}]' --security-group-ids ${SECURITY_GROUP} --subnet-id ${US_EAST_1D_SUBNET} --associate-public-ip-address

Configuring Storage

The i2.2xlarge and larger instances have multiple ephemeral volumes that should be striped together to produce your data volume. On each host, use mdadm to create the raid volume:

$ sudo apt-get install mdadm
$ sudo mdadm --create /dev/md0 --level=stripe /dev/xvdb /dev/xvdc
$ sudo mkfs -t ext4 /dev/md0
$ sudo mkdir -p /var/lib/mongodb
$ sudo mount /dev/md0 /var/lib/mongodb

Installing MongoRocks

To use MongoRocks, you will need to use a special build of MongoDB that has the storage engine compiled in. At Parse, we run an internally built version, as a pre-packaged version of MongoRocks did not exist when we initially migrated. For new installations, we recommend that you use the Percona builds located here. These builds are 100% feature compatible with the official MongoDB releases, but have been compiled to include the RocksDB storage engine. We have tested the Percona builds with the Parse migration utility and the strata backup software, and verified that both work and are suitable for running Parse apps in production.

Ubuntu installation

$ curl -s -O
$ tar -xf percona-server-mongodb-3.0.8-1.2-r97f91ef-trusty-x86_64-bundle.tar
$ sudo dpkg -i percona-server-mongodb-*


Configuring MongoDB to use the RocksDB storage engine is a matter of setting a few flags in the mongodb.conf file. For complete documentation of all MongoDB configuration options, visit the MongoDB reference page for Configuration File Options.

First, set the storage engine parameter to instruct MongoDB to use the RocksDB storage engine.

storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  engine: rocksdb

Next, some additional parameters.

# RocksDB tuning parameters
# Yield if it's been at least this many milliseconds since we last yielded.
setParameter = internalQueryExecYieldPeriodMS=1000
# Yield after this many "should yield?" checks.
setParameter = internalQueryExecYieldIterations=100000

The adjustments to the internalQueryExecYield* options reduce the frequency that MongoDB yields for writers. Since RocksDB has document level locking, frequent yielding is not necessary.
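Putting the storage and yield settings together, the relevant portion of mongodb.conf in YAML form might look like this (a sketch assembled from the settings above; check the MongoDB Configuration File Options reference for your exact version):

```yaml
# Sketch: RocksDB storage engine plus reduced query yielding
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  engine: rocksdb

setParameter:
  internalQueryExecYieldPeriodMS: 1000
  internalQueryExecYieldIterations: 100000
```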


When starting MongoRocks on a host for the very first time, your storage directory (e.g. /var/lib/mongodb) should be empty. If you have existing data from other storage engines (i.e. MMAP or WiredTiger), you should back up and remove those data files, as the storage formats are not compatible.


Installing Go

Strata is written in Go and requires Go 1.4 or later to compile. You can use apt or yum to install Go, but these packages are frequently out of date on common distributions. To install a more recent version of Go:

$ curl | sudo tar xzf - -C /usr/local
$ sudo mkdir /go
$ sudo chmod 0777 /go

You will need to add Go to your PATH environment variable and set GOPATH. On Ubuntu, this is as simple as:

$ echo -e 'export PATH="/usr/local/go/bin:${PATH}" \nexport GOPATH=/go' | sudo tee /etc/profile.d/

After logging in again, you can test that Go is installed by running:

$ go version
go version go1.5.3 linux/amd64

Installing strata

With go installed, compiling and installing strata is simply a matter of using go install:

$ go get
$ go install

This installs the strata binary to $GOPATH/bin/strata.

Configuring backups

At Parse, we deployed strata using a simple distributed cron on all backup nodes. You can find a sample cron and schedule here in the rocks-strata repository.

At a high level, the three things you want to do regularly when running backups with strata are:

  1. Run strata backup to create the actual backup. This stores the data files and backup metadata in S3, identified by a unique replica ID. Each host must have its own replica ID. For example, if your RS is named “mydata” and your host name is “db1”, you might use “mydata-db1” as your replica ID.
  2. Run strata delete to prune metadata for backups older than a certain date. The retention period that you specify is dependent on your needs.
  3. Run strata gc to delete data files that are orphaned by strata delete.
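As a rough sketch, those three steps might be wired into a crontab like this. The schedule, bucket name, and replica ID are placeholders based on the examples in this section, and the retention flags for strata delete are deliberately omitted; consult the rocks-strata documentation for the exact invocations:

```
# Hypothetical crontab for one backup node (replica ID "mydata-db1")
# 1. take a backup every 6 hours
0 */6 * * * strata --bucket=mybucket --bucket-prefix=mongo-rocks backup --replica-id=mydata-db1
# 2. prune metadata for old backups nightly (retention flags omitted; see the strata docs)
0 2 * * * strata --bucket=mybucket --bucket-prefix=mongo-rocks delete --replica-id=mydata-db1
# 3. garbage-collect data files orphaned by the delete step
30 2 * * * strata --bucket=mybucket --bucket-prefix=mongo-rocks gc --replica-id=mydata-db1
```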

Displaying backups

You can view backup metadata at any time with strata show backups.

For example, to see all backups for node db1 in replica set mydb, you would run something like this:

$ strata --bucket=mybucket --bucket-prefix=mongo-rocks show backups --replica-id=mydb-db1

ID   data                      num files   size (GB)   incremental files   incremental size   duration
0    2015-09-02 21:11:20 UTC   4           0.000005    4                   0.000005           187.929573ms

More documentation on strata, including how to restore backups, can be found here.

Migrating Existing Data to MongoRocks

Upgrading an existing replica set to MongoRocks

The data files used by MMAP, WiredTiger, and RocksDB are not compatible. In other words, you cannot start MongoRocks using existing MMAP or WiredTiger data. To change storage formats, you must do one of the following:

  1. Do a logical export and import using mongodump and mongorestore.
  2. Perform an initial sync of data using replication

Option 2 is the easiest, as you can bring a new, empty node online and add it to the replica set without incurring downtime. This approach usually works fine until your data size is in the hundreds of gigabytes. To do so:

  1. Provision a new node configured for RocksDB, following the above steps.
  2. Add the node to your replica set using rs.add().
  3. Wait for the initial sync. Note that the sync must complete before the oplog window expires; depending on the size of your data, you may need to resize your oplog.
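The oplog window constraint in step 3 is simple arithmetic: the window is roughly the oplog size divided by the rate at which your cluster writes oplog entries, and the initial sync must finish well within it. A sketch with hypothetical numbers:

```javascript
// Hypothetical oplog window estimate: window (hours) = oplog size / oplog write rate
var oplogSizeGB = 50;        // configured oplog size
var oplogWriteGBPerHour = 2; // measured oplog churn for your workload
var windowHours = oplogSizeGB / oplogWriteGBPerHour;
console.log(windowHours); // 25 -> initial sync must complete in well under 25 hours
```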

Using MongoDB Read Preference

As of Parse Server 2.5, it is possible to set a read preference for MongoDB queries. For a discussion of read preference, its limitations, and use cases, see the MongoDB documentation for Read Preference.

How to set Read Preference in Parse Server

Currently, read preference can only be set in Cloud Code. For an example, see the cloud code examples.


Development Guide

Running Parse Server for development

Normally, when you run a standalone Parse Server, the latest release that has been pushed to npm will be used. This is great if you are interested in just running Parse Server, but if you are developing a new feature or fixing a bug you will want to use the latest code on your development environment.

First, you will need to clone this repo if you haven’t done so yet.

git clone

You can then link the parse-server module to the cloned repo and run npm install:

npm link parse-server path/to/cloned/repo
npm install

You can now start Parse Server using npm start:

npm start -- --appId APPLICATION_ID --masterKey MASTER_KEY --serverURL http://localhost:1337/parse

Notable Files

The following is a breakdown of the various files you will find in the Parse Server source code. Click on a filename to learn more about the purpose behind each file.

  • index.js - exposes the ParseServer constructor and mutates Parse.Cloud
  • analytics.js - handle the /events routes
  • Auth.js - Auth object, created to hold config/master/user information for requests
  • batch.js - batch handling implemented for PromiseRouter
  • cache.js - simple caching for the app and user sessions
  • classes.js - handle the /classes routes
  • Config.js - Config object, storage for the application configuration and some router information
  • crypto.js - uses bcrypt for password hashing and comparison
  • DatabaseAdapter.js - Interface for allowing the underlying database to be changed
  • ExportAdapter.js - DatabaseAdapter for MongoDB (default)
  • facebook.js - helper functions for accessing the Graph API
  • files.js - handle the /files routes
  • FilesAdapter.js - Interface for allowing the underlying file storage to be changed
  • FileLoggerAdapter.js - LoggerAdapter for logging info and error messages into local files (default)
  • functions.js - handle the /functions routes
  • GridStoreAdapter.js - FilesAdapter for storing uploaded files in GridStore/MongoDB (default)
  • installations.js - handle the /installations routes
  • LoggerAdapter.js - Interface for allowing the underlying logging transport to be changed
  • middlewares.js - Express middleware used during request processing
  • PromiseRouter.js - PromiseRouter uses promises instead of req/res/next middleware conventions
  • push.js - handle the /push route
  • rest.js - main interface for REST operations
  • RestQuery.js - RestQuery encapsulates everything needed for a ‘find’ operation from REST API format
  • RestWrite.js - RestWrite encapsulates everything needed for ‘create’ and ‘update’ operations from REST API format
  • roles.js - handle the /roles routes
  • Schema.js - Schema handles schema validation, persistence, and modification.
  • sessions.js - handle the /sessions and /logout routes
  • testing-routes.js - used by internal Parse integration tests
  • transform.js - transforms keys/values between Mongo and Rest API formats.
  • triggers.js - cloud code methods for handling database trigger events
  • users.js - handle the /users and /login routes


Contributing

We really want Parse to be yours, to see it grow and thrive in the open source community. Please see the Contributing to Parse Server notes.
