Compare commits

..

28 Commits

Author SHA1 Message Date
Tom Pytleski
e5a449b0f3 Merge branch 'develop' into articles/prism/validation.md
* develop: (79 commits)
  Update security-schemes.md
  Update api-operations.md
  Add Spec Validation (#117)
  Add File Validation (#116)
  Add Editor Configuration (#118)
  Update blocks.md
  Update subpages.md
  Update pages.md
  Update pages.md
  Update pages.md
  Update routing.md
  Update managing-headers-footers.md
  Update working-with-files.md
  Update sign-in.md
  Update edit-profile.md
  Update manage-password.md
  Update deactivate-account.md
  Update changing-your-email.md
  Update create-project.md
  Update sign-out.md
  ...

# Conflicts:
#	articles/prism/validation.md
2018-02-01 14:52:56 -06:00
Robert Wallach
2933c31698 Uploaded by https://stackedit.io/ 2018-01-26 16:37:19 -06:00
Robert Wallach
2947b5c9b0 Uploaded by https://stackedit.io/ 2018-01-26 16:36:36 -06:00
Robert Wallach
580783553d Uploaded by https://stackedit.io/ 2018-01-26 16:34:35 -06:00
Robert Wallach
50447580b5 Uploaded by https://stackedit.io/ 2018-01-26 16:31:32 -06:00
Robert Wallach
f2dc5c6cc5 Uploaded by https://stackedit.io/ 2018-01-26 16:30:31 -06:00
Robert Wallach
c41dba8cf9 Uploaded by https://stackedit.io/ 2018-01-26 16:29:29 -06:00
Robert Wallach
8b88f0b48a Uploaded by https://stackedit.io/ 2018-01-26 16:28:28 -06:00
Robert Wallach
cedfb797b7 Uploaded by https://stackedit.io/ 2018-01-26 16:27:27 -06:00
Robert Wallach
ebad136d66 Uploaded by https://stackedit.io/ 2018-01-26 16:22:45 -06:00
Robert Wallach
fbe4ec8182 Uploaded by https://stackedit.io/ 2018-01-26 16:22:43 -06:00
Robert Wallach
85c77a7206 Uploaded by https://stackedit.io/ 2018-01-26 16:22:42 -06:00
Robert Wallach
5112b4fde7 Uploaded by https://stackedit.io/ 2018-01-26 16:21:40 -06:00
Robert Wallach
d04ec87521 Uploaded by https://stackedit.io/ 2018-01-26 16:20:39 -06:00
Robert Wallach
363e543cf0 Uploaded by https://stackedit.io/ 2018-01-26 16:19:37 -06:00
Robert Wallach
3c49e1700a Uploaded by https://stackedit.io/ 2018-01-26 16:18:36 -06:00
Robert Wallach
cfd22a0e52 Uploaded by https://stackedit.io/ 2018-01-26 16:17:35 -06:00
Robert Wallach
2ab496d1a7 Uploaded by https://stackedit.io/ 2018-01-26 16:16:33 -06:00
Robert Wallach
acf3e3dee4 Uploaded by https://stackedit.io/ 2018-01-26 16:15:32 -06:00
Robert Wallach
e3310db0cc Uploaded by https://stackedit.io/ 2018-01-26 16:14:30 -06:00
Robert Wallach
b7565f52de Uploaded by https://stackedit.io/ 2018-01-26 16:13:29 -06:00
Robert Wallach
ffee7f7acd Uploaded by https://stackedit.io/ 2018-01-26 16:09:25 -06:00
Robert Wallach
7e322986cf Uploaded by https://stackedit.io/ 2018-01-26 16:08:24 -06:00
Robert Wallach
a2879cd706 Uploaded by https://stackedit.io/ 2018-01-26 16:07:22 -06:00
Robert Wallach
07bbf40901 Uploaded by https://stackedit.io/ 2018-01-26 16:06:21 -06:00
Robert Wallach
6bd767f92c Update validation.md 2018-01-26 15:49:56 -06:00
Tom Pytleski
3b66566ebe Update image links
finishes #67
2018-01-25 12:11:08 -06:00
Tom Pytleski
58ae124c82 Prism validation article 2018-01-25 11:48:11 -06:00
36 changed files with 121 additions and 1048 deletions

View File

@@ -1,46 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at robbins@stoplight.io. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@@ -1,51 +1,25 @@
# Environments
<!--(FIXME - SHOW CLICKING BETWEEN DIFFERENT ENVIRONMENTS)-->
![](../../assets/gifs/editor-configuration.gif)
An environment is simply a container for data, represented as a list of key-value pairs (behind the scenes, this is a JSON object). Every Stoplight project has one or more environments associated with it. The data stored in an environment can be used in many places within the Stoplight editor.
The Stoplight editor includes an embedded configuration system that can be used to auto-populate environment information and other variables (hostnames, ports, passwords, etc.) utilized by specifications, scenarios, or collections. To setup the editor configuration, click the icon towards the top right of the editor screen immediately to the left of your username.
Environments, and their default data, are defined in the [Stoplight configuration file](./editor-configuration.md#environments).
- __Do__ create an environment for each development environment associated with the project. For example, `development`, `staging`, and `production`.
- __Don't__ create environments for individual users. Instead, use private variables (below) to customize existing environments.
- __Do__ use environment default data to store shared information like hostnames, ports, passwords, etc.
- __Don't__ use environments to store fixture/seed/temporary data.
<!--(FIXME - SHOW SCREENSHOT OF THE ENVIRONMENTS WINDOW)-->
For more information on environment variables and how they can be used during API testing, please
see [here](../testing/variables-environment.md).
![](../../assets/images/editor-configuration.png)
## Private Variables
Private Variables are _only_ stored locally on your system,
and are never sent to Stoplight or the rest of your team. Private variables
should be reserved for secrets specific to you, such as user-specific passwords,
API keys, and other pieces of sensitive and/or individually specific data.
The left-half of the configuration window is dedicated to "Private Variables", which are variables that are _only_ stored locally on your system and are never sent to Stoplight. Private Variables should be reserved for secrets specific to you, such as user-specific passwords, API keys, and other pieces of sensitive data.
Edit private variables by clicking on the environment button in the top right of the Stoplight editor.
## Resolved Variables
> Since private variables are only stored on your computer, make sure they are
backed up in a secure location.
The right-half of the configuration window displays "Resolved Variables", which is a read-only view of the variables currently exposed to your editor based on your current environment. These variables are stored in the `.stoplight` file included in your project (under "Config" in the File Explorer). To update the default or environment-specific variables stored in Stoplight, click the "Manage Environments" button under the configuration window.
## Resolved Variables
![](../../assets/gifs/editor-configuration2.gif)
Resolved Variables shows a read-only view of the variables that are currently
exposed to your editor. They are based on:
Variables stored in your configuration are in JSON, and can be referenced using the following format:
* The currently selected (active) environment
* The active environment's default variables, as defined in the stoplight configuration file
* The active environment's private variables, as defined by you
```
{$$.env.myVariable}
```
Private variables are merged over default variables that share the same name. This makes it easy
for individual team members to customize and extend environments without affecting the rest of the team.
For more information on updating and customizing environment variables, please
see [here](./editor-configuration.md#environments).
***
**Results**
* [Using Environment Variables in Testing](../testing/variables-environment.md)
* [Configuration with the `.stoplight.yml` File](./editor-configuration.md#environments)
Where `myVariable` is the name of the variable in your configuration.
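As a quick illustration (the structure and variable names below are hypothetical, not the exact configuration schema), environment defaults defined in the configuration file could look something like this:
```yaml
# Hypothetical environment defaults; see editor-configuration.md for the actual file format
environments:
  development:
    host: http://localhost:3000
  production:
    host: https://api.example.com
```
With the `development` environment active, a reference like `{$$.env.host}/todos` would resolve to `http://localhost:3000/todos`.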

View File

@@ -1,5 +0,0 @@
# Contact Us
Having trouble finding what you are looking for? Need some additional support? We've got you covered. Shoot us a message and we will get back to you as soon as possible.
Email us at [support@stoplight.io](mailto:support@stoplight.io) or message us in Intercom.

View File

@@ -5,7 +5,7 @@
## What
Hubs allows you to reference other sources to automatically populate your Hub with content. We call this “powering” a building block. You can power a building block with content from the current file, a file from the current project, a file from another project, or a file from an external source.
### What Can I Power
### What can I Power
- Pages
- Subpages
@@ -16,14 +16,17 @@ Hubs allows you to reference other sources to automatically populate your Hub wi
### Power a Subpage
1. Select the Hub you wish to modify
2. Select (or create a new) **Subpage**
3. Click on the **gear icon** in the center of the editor toolbar (If new, this window automatically opens)
1. Select **Power this Subpage with an External Data Source**
2. Select (or create a new) Subpage
3. Click on the gear icon in the center of the editor toolbar (If new, this window automatically opens)
1. Select Power this Subpage with an External Data Source
2. Select the data source from the drop-down menu
3. Input the specific data source or select from the drop-down menu
4. Input an inner data source (optional)
4. Click **Confirm**
4. Click Confirm
<!-- theme: info -->
>Try it Out! Power a Subpage with an API Spec from the same project.
<callout> Try it Out! Power a Subpage with an API Spec from the same project </>

View File

@@ -1,37 +1 @@
# API Models
An API model is a blueprint that identifies the requirements and proposed solution for an API development project. A well crafted API model indicates you understand the problem you are trying to solve. The following steps can help you get started with creating an excellent API model.
## Identify Needs
During interactive sessions with stakeholders, outline all the requirements you want your API to meet. Some important questions to ask:
- What goal(s) do we want to achieve with the API?
- Who are the principal users that will consume or interact with the API?
- What activities will the users execute?
- How can we group the activities into steps?
- What are the API methods that are required for these steps? (Place the methods into common resource groups.)
- What resources will the API expose?
- What permissions will we need?
This process may need to be repeated until the API development team is sure that all requirements are covered.
## Build a Common Vocabulary
Vocabulary is used in your API artifacts such as your data fields, resource names, identifiers, and documentation. Creating a standard vocabulary helps you:
- Communicate well with different audiences.
- Establish a standard or style guide that can be adopted by members of the API development team.
- Easily update your documentation.
## Identify Resource Relationships
If your resources contain references to other resources or contain child resources, it is important to understand and define the types of relationships between resources. Doing so helps you show the links between resources to the API user, making the API more readable. Relationships can be:
- **Dependent**: the resource cannot exist without a parent resource.
- **Independent**: the resource can stand on its own and can reference another resource but does not need another resource to exist or function.
- **Associative**: the resources are independent but the relationship includes or needs further properties to describe it.
## Create a Test Plan
Ensuring that your API meets predefined criteria requires testing. Design test plans early. Feasible tests you can execute include:
- **Functional Testing**: Test the API calls to ensure that it delivers or behaves as expected. For example, you can test to see that the API delivers the expected data based on your model.
- **Mocking** (service simulation): Mocking allows you to execute tests on an API deployment without calling it through a defined API key. Effective API tools will allow you to test your API before implementation.
- **Load Testing**: How will your API perform when deployed on a production server? A load test is one way to simulate the effect of traffic on your servers and observe the performance of your API when it is available to users. Doing a load test will help you understand your API's threshold and what happens if users exceed it.
## Additional Notes
- Create tests that match your use case.
- Discuss security issues during your modelling meetings with your team.
- Ensure the test case is executed to see if the security issues are addressed before deployment. Click here to learn more about security schemes and how to secure your API using best practices.

View File

@@ -1,129 +1 @@
# Preventing Duplications and Simplifying OAS Files
## What
- API resources often have or share similar features
- Duplicating features increases the size and complexity of your API
- Reusable definitions make it easier to read, understand, and update your OAS files
- Similar features can be created as reusable definitions and utilized with references
- These definitions can be hosted on the same server as your OAS file or on a different server
- You can reference a definition hosted at any location or server
- Apart from defining reusable definitions, you can also define reusable responses and parameters. You can store your reusable definitions, responses, and parameters in a common library.
<!-- theme: info -->
>Key Terms: A definition is a named schema object. A reference is a path to a declaration within an OAS file.
## How to Reference a Definition
To invoke a reference to a definition, use the **$ref** keyword. For example:
```
$ref: '#/definitions/Pets'
```
## URL, Remote, and Local References
### General Syntax for URL Reference
- Reference a complete document or resource located on a different server:
```
$ref: 'http://url_resource_path'
```
- Reference a particular section of a resource stored on a different server:
```
$ref: 'http://url_resource_path/document_name.json#section'
```
### General Syntax for Remote Reference
- Reference a complete document or resource located on the same server and location:
```
$ref: 'document_name.json'
```
- Reference a particular section of a resource stored on the same server:
```
$ref: 'document_name.json#section'
```
### General Syntax for Local Reference
- Reference a resource found in the root of the current document and the definitions:
```
$ref: '#/definitions/section'
```
## Best Practices
- Only use $ref in locations specified by the OpenAPI Specification
- Always enclose the value of your local reference in quotes (when using YAML syntax) to ensure it is not treated as a comment. For example:
Good
```
"#/definitions/todo-partial"
```
Bad
```
$ref: #/definitions/todo-partial
```
## Examples
- Assuming you have the following schema object named **Todo Partial** and you want to use it inside another definition:
```
{
  "title": "Todo Partial",
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    },
    "completed": {
      "type": [
        "boolean",
        "null"
      ]
    }
  },
  "required": [
    "name",
    "completed"
  ]
}
```
- To refer to that object, you need to add $ref with the corresponding path to your response:
```
{
  "title": "Todo Full",
  "allOf": [
    {
      "$ref": "#/definitions/todo-partial"
    },
    {
      "type": "object",
      "properties": {
        "id": {
          "type": "integer",
          "minimum": 0,
          "maximum": 1000000
        },
        "completed_at": {
          "type": [
            "string",
            "null"
          ],
          "format": "date-time"
        },
        "created_at": {
          "type": "string",
          "format": "date-time"
        },
        "updated_at": {
          "type": "string",
          "format": "date-time"
        },
        "user": {
          "$ref": "https://exporter.stoplight.io/4568/master/common.oas2.yml#/definitions/user"
        }
      },
      "required": [
        "id",
        "user"
      ]
    }
  ]
}
```
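The same approach works for the reusable responses and parameters mentioned earlier: declare them once in top-level `parameters` and `responses` sections and point at them with `$ref`. A rough OAS v2 sketch (the names and paths are illustrative, not taken from the article):
```yaml
parameters:
  todoId:
    name: todoId
    in: path
    type: integer
    required: true
responses:
  NotFound:
    description: The requested resource was not found
paths:
  /todos/{todoId}:
    get:
      parameters:
        - $ref: '#/parameters/todoId'
      responses:
        "200":
          description: The requested todo
        "404":
          $ref: '#/responses/NotFound'
```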

View File

@@ -1,61 +1 @@
# Generating Schema
## What
- A schema is metadata that defines the structure, properties, and relationships between data. It also defines the rules that must be adhered to and is usually in the form of a document.
- A structured approach is always recommended for handling and manipulating data.
- The "$ref" keyword is used to reference a schema.
## Why
- A schema definition makes the process of handling data more structured.
- The process of validation and handling user input errors can be improved through the use of schemas.
- Schemas encourage the 'single source of truth' (single place to update a definition) concept which, among other things, makes it easier to create and maintain endpoints.
## Best Practices
- It is advisable to always use a schema when you define and implement your API.
- Use schemas to rapidly extract titles, descriptions, and samples for easy API documentation.
## JSON Schema
- JSON (JavaScript Object Notation) is a popular, human-readable data format that is easy to parse on the server or client side.
- JSON Schema is a standard that contains information about the properties of a JSON object that can be used by an API. It also helps validate the structure of JSON data.
- The properties include name, title, type, etc.
- JSON Schema Specification is divided into three parts:
- **JSON Schema Core**: describes the basic foundation of JSON Schema
- **JSON Schema Validation**: describes methods that define validation constraints. It also describes a set of keywords that can be used to specify validations.
- **JSON Hyper-Schema**: an extension of the JSON Schema Specification that defines hyperlinks, images, and hypermedia-related keywords.
## Example
Assume you have an API that requires data provided in the format below:
```
{
  "pets": [
    { "id": 1, "petName": "Blaze", "petType": "Canine", "age": 2 },
    { "id": 2, "petName": "Felicia", "petType": "Feline", "age": 1 },
    { "id": 3, "petName": "Bolt", "petType": "Canine", "age": 3 }
  ]
}
```
As seen above, each object in the pets array contains the following properties: id, petName, petType, and age. You can create a schema definition to validate the data and ensure it is in the expected format. The schema definition is outlined below:
```
{
  "type": "object",
  "properties": {
    "pets": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["petName", "petType"],
        "properties": {
          "id": { "type": "number" },
          "petName": { "type": "string" },
          "petType": { "type": "string" },
          "age": { "type": "number" }
        }
      }
    }
  }
}
```
### Related Articles
- [JSON Schema](http://json-schema.org/specification.html)

View File

@@ -1,85 +0,0 @@
# Models
## What
A model contains common reusable information that can be referenced in your endpoint definitions or other models in your API design. The resources or endpoints in your API project might contain duplicate structures and response objects. Models reduce this duplication by helping you extract and define common resources, making your project easier to maintain.
## Best Practices
### Avoid Cluttered APIs
When you have several endpoints with the same structure, objects, and properties, your API design is untidy. Ensure that you extract reusable artifacts and build them as pragmatic models referenced by other resources within your API project.
### Use a Design First Approach
A design first approach helps create neat and consistent models. It will take longer, but it ensures you build an effective API that is easy to understand and maintain.
## How to Create Models using the Stoplight Modeling Editor
1. Create a new API project (link on how to create a new project goes here)
2. For this example, we will be referring to endpoints created for a fictional Pet Store, as listed below:
- GET /pets (return all pets)
- GET /pets/{petid} (return the pet with a specified id)
- POST /pets (enter pet information)
- PUT /pets/{petid} (update the pet with a specified id)
- DELETE /pets/{petid} (delete the pet with a specified id)
3. The GET /pets method has an array of objects in the Response body with the following properties:
```
{
  id (string),
  name (string),
  date_created (string, date format),
  date_updated (string, date format),
  approved (boolean),
  approved_by (string)
}
```
4. The GET /pets/{petid} method duplicates the object above with the same properties:
```
{
  id (string),
  name (string),
  date_created (string, date format),
  date_updated (string, date format),
  approved (boolean),
  approved_by (string)
}
```
5. The PUT /pets/{petid} method duplicates the object above in the Response body, with a slight difference in the Request body, which has the object and properties below:
```
{
  id (string),
  approved (boolean),
  approved_by (string)
}
```
<!-- theme: info -->
>Duplication of Objects: If you need to make changes, you would have to update this information in three or more endpoints. Creating a model solves this issue.
6. To create a model, click the + sign next to the Model section.
![](../../assets/images/create-model.png)
7. Enter the details for the key, title, and description fields
![](../../assets/images/editor-details.png)
8. Click on the Editor Tab to create the object and specify the properties you want in the model (You can also copy and paste the JSON Schema from an endpoint into the Raw Schema section of the model)
![](../../assets/images/create-object.png)
![](../../assets/images/model-design.png)
9. Click the Save button to save the changes you have made in the editor
10. Select the GET /pets/{petid} method (or any endpoint) and navigate to Responses → Editor
11. To reference the model in your endpoint, click on the object and select $ref as the array item type. Select the model you created from the drop-down list
![](../../assets/images/ref-model.png)
12. Select the Viewer section to see the changes you have made
![](../../assets/images/viewer-ref-model.png)
13. All changes made to the properties of the object in the model are now automatically updated in all endpoints that make a reference to the model
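For reference, here is a minimal sketch of roughly what the resulting OAS v2 fragment might look like once the model is referenced from an endpoint (the key, property names, and formats are illustrative):
```yaml
definitions:
  pet:
    title: Pet
    type: object
    properties:
      id:
        type: string
      name:
        type: string
      date_created:
        type: string
        format: date
      date_updated:
        type: string
        format: date
      approved:
        type: boolean
      approved_by:
        type: string
paths:
  /pets/{petid}:
    get:
      parameters:
        - name: petid
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The pet with the specified id
          schema:
            $ref: '#/definitions/pet'   # every endpoint references the single model
```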

View File

@@ -1,25 +1 @@
# Introduction to Objects in API Document Structure
- An OpenAPI document is a document that describes an API and conforms to the OpenAPI Specification. These documents can be in YAML or JSON format.
- Your OpenAPI document can be a single document or a combination of several associated resources which use the $ref syntax to reference the interrelated resources.
## Primitive Data Objects Supported in an OpenAPI Document
- integer (int32 and int64)
- number (float and double)
- string
- byte
- binary
- boolean
- date
- dateTime
- password
## Additional OpenAPI Objects
- **Info Object**: describes the API's title, description (optional), and version metadata. It also supports other details such as contact information, license, and terms of service.
- **Server Object**: identifies the API server and base URL. You can identify a single server or multiple servers and describe them using a description field. All API paths are relative to the URL of the server; for example, "/pets", when fully delineated, may describe "http://api.hostname.com/pets".
- **Paths Object**: outlines relative paths to individual endpoints within your API and the operations or HTTP methods supported by the endpoints. For example, "GET/pets" can be used to return a list of pets.
- **Parameter Object**: describes a single operation parameter. Operations can have parameters passed through by several means, such as the URL path, query string, cookies, and headers. Parameters can be marked as mandatory or optional; you can also describe the format and data type, and indicate a parameter's deprecation status.
- **Request body object**: describes body content and media type. It is often used with insert and update operations (POST, PUT, PATCH).
- **Response object**: describes the expected response, which can be referenced using the $ref syntax or described within the document. It associates an HTTP response code with the expected response. Examples of HTTP status codes include the 200 (OK) or 404 (Not Found) codes. [Click here for more information on HTTP Response codes](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes).
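To see how these objects fit together, here is a minimal OpenAPI v2-style skeleton (the hostnames, paths, and schema details are placeholders):
```yaml
swagger: "2.0"
info:                          # Info Object
  title: Pet Store API
  version: "1.0"
host: api.hostname.com         # server location (OAS v2 uses host + basePath)
basePath: /
paths:                         # Paths Object
  /pets:
    get:
      parameters:              # Parameter Object
        - name: status
          in: query
          type: string
          required: false
      responses:               # Response Objects
        "200":
          description: A list of pets
          schema:
            type: array
            items:
              $ref: '#/definitions/Pet'
        "404":
          description: Not found
definitions:
  Pet:
    type: object
    properties:
      name:
        type: string
```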

View File

@@ -1,38 +1 @@
# Object Inheritance
## What
- A **model** contains common reusable information that can be referenced in your endpoint definitions or other models in your API design.
- When a model derives its properties from another model, the event is called **inheritance**.
- The model which contains the common set of properties and fields becomes a parent to other models, and it is called the **base type**.
- The model which inherits the common set of properties and fields is known as the **derived type**.
- If a base type inherits its properties from another model, the derived type automatically inherits the same properties indicating that inheritance is **transitive**.
- OpenAPI Specification v2 uses the **allOf** syntax to declare inheritance.
- **allOf** takes a collection of object definitions that are validated independently but collectively make up a single object.
## Why
- Inheritance makes your API design more compact. It helps avoid duplication of common properties and fields.
## Best Practices
<!-- theme: info -->
> Avoid using contradictory declarations, such as declaring properties with the same name but dissimilar data types in your base model and derived model.
### Example
```yaml
definitions:
  Vehicle:
    type: object
    properties:
      brand:
        type: string
  Sedan:
    allOf: # This keyword combines the Vehicle model and the Sedan model
      - $ref: '#/definitions/Vehicle'
      - type: object
        properties:
          isNew:
            type: boolean
```

View File

@@ -2,35 +2,11 @@
![](../../assets/gifs/file-validation-oas-spec.gif)
## What
OpenAPI validation is the process of verifying the underlying OpenAPI file syntax by making sure it conforms to the [OpenAPI Specification requirements](https://github.com/OAI/OpenAPI-Specification#the-openapi-specification) provided by the [OpenAPI Initiative](https://www.openapis.org/). Stoplight immediately validates any changes done to a spec to ensure they are in the correct format prior to being saved.
<!-- theme: info -->
> Stoplight currently supports the OpenAPI v2 specification. We are working on support for OpenAPI v3, and should have more information in the coming months.
## Why
- Validation promotes data integrity in your data store. For example, a user making updates during a PUT operation might omit data for an important property and overwrite valid data, compromising data integrity.
- Validation indicates that you are engaging in good design practice and that your API design is consistent.
## Best Practices
- All requests made to an API should be validated before processing
- Mark all mandatory properties as **Required** to ensure that the value of the property is provided
- Assign a default value to optional properties or parameters with missing values. The server will use the default value when a value is missing or not provided
- You can use the keyword **readOnly** to designate a property that can be sent in a response but should not be sent in a request
<!-- theme: info -->
> Using a default value is not recommended when a property or parameter is mandatory
- An API can consume different media types. The accepted media types can be specified using the **consumes** keyword at the operation level or root level. For example:
```yaml
consumes:
  - multipart/form-data
# or
consumes:
  - application/json
```
- An HTTP response containing a user-friendly error description is useful when validation fails
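As an illustration of the **Required**, default, and **readOnly** practices above, a minimal OAS v2 definition might look like this (the definition and property names are placeholders):
```yaml
definitions:
  todo:
    type: object
    required:
      - name                # mandatory property; no default is assigned
    properties:
      name:
        type: string
      completed:
        type: boolean
        default: false      # optional property with a default value
      id:
        type: integer
        readOnly: true      # returned in responses, never sent in requests
```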
***
**Related**

View File

@@ -1,42 +1 @@
# Polymorphic Objects
## What
* Resources in your API are polymorphic. They can be returned as XML or JSON and can have a flexible number of fields. You can also have requests and responses in your API design that can be depicted by a number of alternative schemas.
* **Polymorphism** is the capacity to present the same interface for differing underlying forms.
* The **discriminator** keyword is used to designate the name of the property that decides which schema definition validates the structure of the model.
## Why
* Polymorphism permits combining and extending model definitions.
## Best Practices
<!-- theme: warning -->
> The discriminator property **must** be a mandatory or required field. When it is used, the value **must** be the name of the schema or any schema that inherits it.
### Example
```yaml
definitions:
  Vehicle:
    type: object
    discriminator: model
    properties:
      model:
        type: string
      color:
        type: string
    required:
      - model
  Sedan:  # If Vehicle.model is Sedan, the Sedan model is used for validation.
    allOf:
      - $ref: '#/definitions/Vehicle'
      - type: object
        properties:
          dateManufactured:
            type: string
            format: date
        required:
          - dateManufactured
```

View File

@@ -1,62 +0,0 @@
# Prism Introduction
Prism is a performant, dependency-free server built specifically to work with web APIs.
### Features
- Act as a mock server, routing incoming requests to example responses, or dynamically generating examples on the fly.
- Act as a transformation layer, manipulating incoming requests and outgoing responses.
- Act as a validation layer, validating incoming requests and outgoing responses.
- Contract test your APIs, given an OAS (Swagger 2) file.
- Log all or a subset of traffic to configurable locations.
- Extend existing APIs with new endpoints or capabilities.
- Act as a system-wide proxy, blocking traffic to particular websites or endpoints.
### Simplicity Redefined
Run it anywhere. It runs on OS X, Windows, and Linux, with no external dependencies. It is a single, self-contained binary file that you can easily run from your terminal with a single command.
## Getting Started
#### macOS and Linux
```
# Install Prism
curl https://raw.githubusercontent.com/stoplightio/prism/master/install.sh | sh
```
#### Windows
Download the appropriate binary from [here](https://github.com/stoplightio/prism/releases). Unzip the binary file, then navigate in your terminal to the folder where you extracted Prism.
### Run a Simple Mock Server
Prism understands OAS (Swagger 2), so let's get started by spinning up a quick mock server for the popular Petstore API. To do this, run the following command in your terminal:
```
# os x / linux
prism run --mock --list --spec http://petstore.swagger.io/v2/swagger.json
# windows
path/to/prism.exe run --mock --list --spec http://petstore.swagger.io/v2/swagger.json
```
Here, you are using the "run" command to run a server based on the spec file passed in via the --spec argument. The spec location can be the filepath to a file on your computer, or the URL to a publicly hosted file. The --mock argument tells Prism to mock all incoming requests, instead of forwarding them to the API host described in the spec file. The --list argument is a convenience, and tells Prism to print out the endpoints in the spec on startup.
Prism starts on port 4010 by default - try visiting ```http://localhost:4010/v2/pet/findByStatus``` in your browser. This is one of the endpoints described in the petstore spec you passed in. You'll notice that it returns an error about a required query string parameter "status". This is the automatic request validation at work! The Swagger spec specifies that a query string parameter named "status" is required for this endpoint, so Prism simulates a 400 response for you. Reload the page with a query string parameter, and you will see the dynamically generated mock response ```http://localhost:4010/v2/pet/findByStatus?status=available```.
Tada! With a single command you have started a validating, dynamically mocking version of the Swagger petstore API.
### Run some Contract Tests
Prism consumes OAS (Swagger 2) files. OAS provides the contract for your API. If your OAS file contains the x-tests extension (generated automatically if you use the Stoplight app to manage your OAS and tests), then you can run tests with Prism.
Check out [this specification](https://goo.gl/jniYmw). If you scroll past all the regular OAS properties, you will notice a ```x-tests``` extension near the bottom of the file. Inside of that property, we have a few test cases defined. This OAS file, along with its tests, is managed in the Stoplight app (we export our API from within the app to produce this file).
#### To Run the Contract Tests
```
# os x / linux
prism test --spec https://goo.gl/jniYmw
# windows
path/to/prism.exe test --spec https://goo.gl/jniYmw
```
You should see some nice output to your terminal detailing the tests and assertions that are run. These tests take our OAS contract and apply it to your API. They act as a sort of sync manager.
<!-- theme: info -->
> If a test fails, it means one of two things: either your API is broken, or the OAS contract is out of date or incorrect

View File

@@ -1,4 +1,100 @@
# Setting up a Hosted Prism Contract Server
Contract servers are a powerful tool in a developer's toolbox. They use your OAS (Swagger 2) and JSON Schema definitions to validate HTTP traffic passing through your API. You can use them to:
1. **Add Contract Tests to an Existing Test Suite**
- Already have a test suite? No problem! Point your tests at the contract test server, and it can annotate responses with the contract test results. Your test suite just has to check for these response headers and fail appropriately.
2. **Monitor Traffic for Anomalies**
- Know _when_ your implementation breaks instantly, and _why_. By sending all your traffic through your contract server, you can flag that 1 in 1000 request anomaly.
3. **Detect Changes in 3rd Party APIs**
- APIs (particularly microservices) usually make calls to other APIs. These dependencies can be disconcerting because you have no control over when they change. With contract servers positioned between you (the consumer) and the external API, you can be alerted when the external API changes.
> **Real World Use Case**: Still not convinced? Then head over to SendGrid and learn how they used contract servers to power the [integration tests](https://sendgrid.com/blog/stoplight-io-to-test-api-endpoints) for their 7 SDKs.
If you aren't familiar with JSON Schema, we highly recommend you head [here](https://spacetelescope.github.io/understanding-json-schema/) first.
If you are coming from Stoplight Classic (v2), you will notice that there is a little bit more setup involved, but only a couple steps.
## Hosted Contract Server Steps
_Note: We plan to introduce templates to the Stoplight editor file creation process soon. This feature will automate most of the steps below and turn mock server creation into a one-click solution._
_For this article we will validate a service that already exists. It is just a simple API representing a todos list manager that is running at http://todos.stoplight.io. We have created an OAS Specification for it already and you can download it [here](https://exporter.stoplight.io/3351/master/todos.oas2.yml)._
1. Let's create a new project, create a new spec, name it `todos.oas2`, and paste the JSON from the spec above in the code editor.
![](../../assets/gifs/validation-todos-contract-guide.gif)
2. Create a new **Prism instance file** in the project. Name it `todos.contract.prism`.
3. Prism instances are made up of APIs and Rules; you can learn more about them here. Add an API to the Prism instance and connect the `todos.oas2` specification that you created earlier. Also, let's change the `id` to `todos` and set the _Upstream URL_ to _http://todos.stoplight.io_. The Upstream URL is where the contract server will forward incoming requests.
![](../../assets/gifs/validation-todos-prism-api.gif)
4. Next, add a **new rule** that you will set up to power the validation. Rules simply apply scenarios to HTTP traffic passing through the Prism instance.
5. Once you have created a new rule, you need to connect it to the API we added earlier. To do that, click on the `apis` dropdown input and select the previously created API. Connecting the rule to the API you defined earlier makes the OAS file available to scenarios in the rule.
6. Lastly, you need to add a **scenario** that will actually perform the validation. We have an official Stoplight validate scenario [here](https://next.stoplight.io/stoplight/prism?edit=%23%2Fscenarios%2validate), which makes it easy to get started.
1. Add a scenario to the `after` section of your rule.
2. Select `another project` in the first dropdown.
3. Search for `prism`.
1. The file you are looking for within that project is `helpers.prism.yml` and the specific scenario is called `validate`.
This validate scenario should suit most of your needs. It will check the request/response headers, request/response body, request path parameters, and query strings. It will also add response headers to the HTTP request on the way back to the consumer with the results of the validation. For advanced use cases, please send us a [message](). We would love to help out!
![](../../assets/gifs/validation-todos-prism-rule.gif)
7. Save, and let's verify that your contract server is working. Click on Home and send a test request to `GET /todos`.
_Stoplight's visual editor makes it really easy to debug requests and responses. If you look at the response headers, specifically `Sl-Valid`, it should be `false`. This signifies that (according to your API specification) the request/response isn't valid, i.e. the contract test failed. You can find out why by inspecting `Sl-Validation-Messages`. For the purpose of this article, the messages are below, and it looks like `user` is a required property and it is missing._
```js
// Sl-Validation-Messages
[
  {
    response: {
      message: "The document is not valid. see errors",
      error:
        "user: user is required\n0: Must validate all the schemas (allOf)\nuser: user is required\n1: Must validate all the schemas (allOf)\nuser: user is required\n2: Must validate all the schemas (allOf)\nuser: user is required\n3: Must validate all the schemas (allOf)\nuser: user is required\n4: Must validate all the schemas (allOf)\nuser: user is required\n5: Must validate all the schemas (allOf)\nuser: user is required\n6: Must validate all the schemas (allOf)\nuser: user is required\n7: Must validate all the schemas (allOf)\nuser: user is required\n8: Must validate all the schemas (allOf)\nuser: user is required\n9: Must validate all the schemas (allOf)\n"
    }
  }
];
```
![](../../assets/gifs/validation-todos-prism-verify.gif)
8. Let's get rid of this validation error. We don't have control over the API implementation, so we have to update our specification.
9. Navigate to the `todos.oas2` file, update the `Todo Full` by deleting the user property, and hit save.
10. Let's resend a request to `GET /todos` and inspect the results. This time `Sl-Valid` is `true`. Awesome, we now have a valid spec and API.
![](../../assets/gifs/validation-todos-prism-done.gif)
# Running your Prism Server Locally
In the previous section, you learned how to create a simple Prism instance that is hosted with Stoplight. It is a powerful, accessible tool that allows your frontend and backend teams to work simultaneously. But the hosted Prism instance might not work behind your company firewall, or you might want to run Prism locally on your desktop. Well, you are in luck: Prism is easy to install and run.
## Local Contract Server Steps
1. Install [Prism](https://github.com/stoplightio/prism). Make sure to install Prism Next; the version should be >= `2.0.0-beta.x`.
2. Open up your terminal, log into Stoplight Next with the `prism login` command, and enter your Stoplight Next credentials. Once you are logged in, you will have access to your private and all public projects.
3. Get the export link for the prism mock instance you created above.
![](../../assets/gifs/prism-install.gif)
4. Run `prism serve {export-link} --debug` and open this [link](http://localhost:4010/todos). You can inspect the results by opening the developer console for your browser.
![](../../assets/gifs/validation-todos-prism-local.gif)
# Validating Mock Servers
Validating an existing service is powerful, but what happens if you are still implementing your API and all you have is a mock server? How do you keep the examples valid?
_Note: If you don't have an existing mock server, check out [this](https://next.stoplight.io/stoplight/stoplight-next-docs/blob/master/prism.mock.server.md) article first and then continue reading._
## Steps
1. Repeat step 7 above. That is it. Now you will know when your examples are out of date. Not only will your mock server be accurate, but it will help you catch any errors in examples in your documentation that you provide to users.
# Recap
You now have a fully functional Prism contract server. We have created a public project full of useful Prism resources. We encourage you to explore the other Prism helpers which are located [here](https://next.stoplight.io/stoplight/prism/blob/master/helpers.scenarios.yml). Let us know what you think. We are excited to see what you do!
For the more experienced Prism user, we have set up some advanced Prism instances in the official Stoplight Next [Prism Project](https://next.stoplight.io/stoplight/prism).

View File

@@ -1,30 +0,0 @@
# Assertions
## What is an Assertion?
- An API test consists of a series of steps (these are sometimes HTTP requests) that can be executed collectively or individually.
- An **assertion** is a specification that indicates the expected outcome (response) to a request executed in a test.
- A test is unsuccessful if an assertion fails, i.e., the actual outcome does not match the expected outcome
- You can create assertions for status codes, response time, response content, header values, etc.
- When you execute an assertion, you can determine the type of operation you want to perform with your expected outcomes.
### Comparison Logic Available in Scenarios
- equals
- greater than
- greater than or equals
- less than
- less than or equals
- not equal
- exists
- length equals
- contains
- validate pass
- validate fail
## Why
- Assertions are checked any time a test is executed.
- Assertions are used to determine the state of a test (pass or fail).
- Assertions are ideal for discovering if an API satisfies stipulated objectives.
## Assertions in Scenarios
- Scenarios in Stoplight are grouped into collections. To create an assertion for a step in Scenarios, you need to create a collection and add your Scenarios to it.

View File

@@ -1,20 +0,0 @@
# Authorization
## What is Authorization?
- **Authentication** is the process of verifying if a user or API consumer can have access to an API.
- **Authorization** defines the resources an authenticated (properly identified) user can access. For example, a user might be authorized to use an endpoint for retrieving results but denied access to the endpoint for updating a data store.
- Authentication strategies can be implemented using basic HTTP authentication or OAuth methods. Authorization can be implemented using roles and claims.
## Authentication Schemes
- **Basic Authentication** is easy to implement and utilizes HTTP headers to validate API consumers. While the credentials might be encoded, they are not encrypted. It is advisable to use this method over HTTPS/SSL.
- **OAuth 1.0** has its foundation in cryptography. Digital signatures are used to authenticate and ensure the data originates from an expected source. It can be used with or without SSL.
- While OAuth 1.0 works primarily with web clients, **OAuth 2.0** works with web and non-web clients. OAuth 2.0 is easy to implement and focuses on bearer tokens. It works with HTTPS/SSL for its security requirement.
- **AWS Signature** is a security protocol that defines authentication information added to AWS requests. It consists of an access key ID and a secret access key. Users who generate manual HTTP requests to AWS are required to sign the requests using AWS Signature. [Click here to learn more about AWS Signature](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html)
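For reference, a minimal OAS v2 `securityDefinitions` sketch covering two of these schemes might look like the following (the names, URLs, and scopes are illustrative):
```yaml
securityDefinitions:
  basicAuth:
    type: basic                # Basic Authentication over HTTPS/SSL
  petstoreOAuth:
    type: oauth2               # OAuth 2.0 with bearer tokens
    flow: accessCode
    authorizationUrl: https://example.com/oauth/authorize
    tokenUrl: https://example.com/oauth/token
    scopes:
      'read:pets': Read access to pet data
security:
  - basicAuth: []              # applied globally; can also be set per operation
```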
## Best Practices for Authentication and Authorization when Testing APIs
- Use the same authentication and authorization your users would use during testing. Doing so will help you effectively identify and resolve issues users might face in a live scenario.
- Avoid creating too many test accounts with administrative access to all endpoints and resources during your test phase. They can be exploited, putting your resources and data at risk when the API is available to consumers.
- Use encryption to store valid IDs and credentials. Ensure test procedures containing valid user IDs or tokens are not displayed as unmasked text in test results or test logs.
- Test your authentication and authorization procedures rigorously by attempting to access secured resources with invalid credentials or session tokens, or by attempting to access a resource denied to an authenticated user.

View File

@@ -1,37 +0,0 @@
# Contract Testing
Scenarios makes it easy to incorporate your OAS / Swagger API specification into your testing process. A few benefits to doing this include:
- **DRY**: Don't re-create test assertions that check for what is already described in your API contract.
- **Governance**: Quickly figure out if the API that was created actually conforms to the API design that was initially agreed upon.
- **Sync Manager**: Your API spec is the single source of truth that describes your API. From it, you might generate documentation, SDKs, mock servers, etc. Incorporating your spec into your tests makes sure that your API spec accurately represents your API over time.
<!-- theme: info -->
> If you don't have an API specification yet, you can create one using the Stoplight modeling tool!
## Connecting The Spec
The first thing you need to do to get started with contract testing is connect your API spec to the Scenarios Collection.
1. Create a new (or open an existing) **Scenario file** in the Stoplight editor
2. Select **Swagger/OAS Coverage** in the Scenarios menu to the left
3. Open **Contract Test Settings**
4. Click **+ Add Spec**
5. Select a file from either **This Project** or an **External URL**
6. You are all set! You can now test against an API spec.
## Using the Coverage Report
The coverage report gives you a quick overview of which parts of the connected specs are covered by test assertions in the current Scenario Collection.
You can use the coverage report to quickly stub out a new scenario. Just click the status codes in the table matrix for the steps you want to add to your scenario (in order). Once you've added all the steps, click the "Create Scenario" button in the top right. This will create a scenario with as much setup as possible, using the connected spec for data. It will set your request body, set variables in a sensible way, automatically setup contract tests, and more.
You will likely need to tweak the resulting scenario a little bit, but this process will usually get you most of the way to a complete scenario, with contract test assertions in place!
## Automatic Contract Test Assertion
After linking your spec to the Scenario Collection, contract test assertions will be automatically added for step assertions.
Stoplight will look through your API specification for an operation that matches the step's HTTP method + URL, and use the response status code returned from the API to look up the JSON schema. In the example below, we are testing the 200 response schema in our API spec for the GET /todos/{todoId} endpoint.
When this step is run, the HTTP response structure will be validated against the matched JSON schema from our API spec, and any errors will be added to the test results.
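The article illustrates this step with a screenshot; as a rough, hypothetical sketch, the spec fragment being matched for `GET /todos/{todoId}` might look like this (the schema details are placeholders):
```yaml
paths:
  /todos/{todoId}:
    get:
      parameters:
        - name: todoId
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: A single todo
          schema:                 # the JSON schema the step's 200 response is validated against
            type: object
            required:
              - id
              - name
            properties:
              id:
                type: integer
              name:
                type: string
              completed:
                type: boolean
```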

View File

@@ -1,80 +1 @@
# Running a Scenario from Terminal
It is very easy to run scenario collections, or individual scenarios, on your own computer, completely outside of the Scenarios app.
First, install Prism, our command line tool for running scenarios.
*On macOS or Linux:*
```
curl https://raw.githubusercontent.com/stoplightio/prism/2.x/install.sh | sh
```
*On Windows:*
```
Download from https://github.com/stoplightio/prism/tree/2.x
```
After installing, you should be able to run `prism -h` (or `prism.exe -h` in Windows) and see some help text.
The Scenario app has a convenient display that gives you the exact command required to run the collection or scenario that you are viewing, taking into account your current environment variables. If you have the Scenario editor connected to a local file on your computer, it will use the path to that file, otherwise it will use the Scenario SRN (unique identifier).
<!-- theme: warning -->
> Keep in mind that if you are storing your Scenarios on Stoplight's servers, and running them from the command line, you must save them in the Stoplight app before running! This is because Prism will make a call to the Stoplight API to fetch your Scenario JSON, which it will then run from your computer.
See below for a screenshot of the "Run From Terminal" command generator. The command in this box will update live in response to environment, user, and scenario changes.
![](http://i.imgur.com/mqpNanE.png)
## Running Local Files
The `prism conduct` command accepts a filepath. So, if you are working with [local scenario collection](#docTextSection:Ap4Z2B7RgbbLFLjJD) .json files, you can run them with something like:
```bash
prism conduct /path/to/collection.json
```
## Including Specs For Contract Testing
If you are using [contract testing](#docTextSection:tFWniZdshJYLLtKms), you will need to include the filepath to the API specification as part of the command. This is what that looks like:
```bash
prism conduct myOrg/scenarios/myScenarios --spec /path/to/my/swagger.json
```
## Continuous integration
Most CI products (Circle CI, Travis, Jenkins, Codeship, etc.) generally function in the same way: set up the environment, then invoke commands to run tests. With Scenarios + Prism, the process is similar. Install Prism, and then configure the CI process to run the appropriate Prism command. We've included instructions for Circle CI below, but these concepts should loosely apply to other CI products.
#### Circle CI
Integrating [Prism](http://stoplight.io/platform/prism) into Circle CI is easy. All you need to do is install Prism and override the test command.
To install Prism, just add a dependency to your Circle CI config.
``` yaml
dependencies:
  pre:
    - curl https://raw.githubusercontent.com/stoplightio/prism/2.x/install.sh | sh
```
Then override the default test command for Circle CI in your config.
``` yaml
test:
  override:
    - prism conduct orgId/scenarios/scenarioId
```
When running `prism conduct` you can:
- Use the Scenario SRN, as displayed above.
- Include the Scenario JSON on your CI server, and pass in its absolute file path
- Pass in the absolute URL to the scenario JSON served up via HTTP.
<!-- theme: warning -->
> Don't forget to pass in any required environment values with the --env command line flag (or you can provide the filepath to a JSON file with your environment variables)!
For convenience, you can find the full command to run your scenario collection or individual scenario in the Stoplight app.
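Putting the snippets above together, a complete Circle CI config might look roughly like this (the SRN and the environment file path are placeholders):
```yaml
# circle.yml (illustrative)
dependencies:
  pre:
    - curl https://raw.githubusercontent.com/stoplightio/prism/2.x/install.sh | sh
test:
  override:
    # --env points at a JSON file with your environment variables (path is a placeholder)
    - prism conduct orgId/scenarios/scenarioId --env ./scenario-env.json
```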

View File

@@ -1,17 +1 @@
Stoplight Scenarios is a powerful (but accessible!) tool that takes the pain out of API testing. It is a standalone product, available on [the web](https://scenarios.stoplight.io), and as a [desktop app](https://download-next.stoplight.io).
We generally recommend the desktop app when possible. It works with local servers, behind firewalls, and exchanges information with tools on your computer like Git or your favorite IDE. You can switch seamlessly between the desktop app and the web app.
We engineered Scenarios from the ground up to be:
- **Powerful** Easily assert, capture, transform, and validate your API Spec (Swagger) against your actual API. And if that isn't enough, Prism has a powerful JavaScript runtime.
- **Portable** Scenarios are described in plain JSON with a well thought out, robust specification. Use our visual editor to quickly generate and manage this JSON. They can be run from our visual tooling, or completely outside of Stoplight, on your own machines or on your continuous integration server.
- **Flexible** Your APIs, your tests, your way. Scenarios only test what you want them to. They have no opinion about your architecture (Monolithic vs Microservices), company structure (in house vs distributed), development lifecycle (production vs TDD), and your environment (development vs staging vs production).
- **Fast** Time can't be slowed down, and we can't give it back to you. Creating tests should be quick, and waiting for your tests to run shouldn't feel like watching water boil. Scenarios are run concurrently for maximum speed: run hundreds of API requests and test assertions in seconds.
### Editor UI Overview
![](https://cdn.stoplight.io/help-portal/scenarios/scenario-editor-callout.png)

View File

@@ -1,50 +1 @@
# Overview of Testing with HTTP Requests
## What are HTTP methods?
- Hypertext Transfer Protocol (HTTP) is a set of rules that define how information is requested, transmitted, and formatted between a client and a server. HTTP methods (verbs) are used to implement create, read, update, and delete operations on identified resources.
- HTTP methods are classified as safe or non-safe, and as idempotent or non-idempotent. Safe methods do not change the state of a resource. Idempotent methods, if executed repeatedly, deliver consistent outcomes. An example of idempotency is outlined below:
### Example
- `petAge = 2` # always leaves petAge equal to 2, no matter how many times the statement is executed. This statement is idempotent.
- `petAge++` # produces a different result each time it is executed. This statement is non-idempotent.
## Methods
- The **GET** method retrieves data and resource representation. It does not change the state of a resource and several executions produce the same results. Thus, it is a safe and idempotent method. When a GET method is successful, it should return a 200 (OK) HTTP status code with the content in the response body and a 404 (NOT FOUND) code if the resource is not available.
- The **POST** method creates new resources. The POST method is not safe and is non-idempotent, as executing the same POST request twice creates two different resources with similar details. It is the suggested method for non-idempotent resource requests.
- The **PATCH** method makes partial updates to a resource and it is non-idempotent.
- The **PUT** method updates a resource or creates a new resource if it does not exist. It is ideal for the complete update of a resource. The PUT method is idempotent but not safe.
- The **DELETE** method deletes a resource. It is idempotent but not safe.
### Summary
The GET method is the only safe method, as it does not change the state of a resource. GET, PUT, and DELETE methods are idempotent while the POST and PATCH methods are non-idempotent.
## Testing with HTTP Requests
- Testing with HTTP requests demonstrates whether an API will perform as expected once it is deployed to a production server and integrated with existing platforms.
<!-- theme: info -->
> HTTP Request Tests should include checks on the response code, message, and body.
- Beyond verifying that essential features work, HTTP Request Tests **save time and cost**.
## Testing with HTTP Requests: Best Practices
### GET
- Test the GET method to confirm it returns the correct data.
- Test that a valid GET request returns a 200 (OK) status code, and that a request for a missing resource returns 404 (NOT FOUND).
- Test every endpoint that fetches data within your API before deployment to a production server.
### POST
- Test the POST method to confirm it creates a resource and returns a 200 (OK) and/or 201 (CREATED) status code for a valid request. For an invalid request, look for a 4xx error status code.
- You can use a follow-up GET request to verify the outcome of the POST operation.
### PUT & PATCH
- Test the PUT and PATCH update methods to ensure that a 200 (OK) or 204 (NO CONTENT) status code is returned for a successful transaction. If unsuccessful, look for a 4xx error status code.
### DELETE
- Test the DELETE request to confirm it returns a 4xx error code when a DELETE operation is executed against an invalid or non-existent resource.
- Test the DELETE request to confirm it returns a 200 (OK) for a successful operation (a combined sketch covering these checks follows this list).
- Tests for the DELETE method **must not** be run against data residing on a production or live data store.
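Putting these best practices together, here is a minimal sketch of HTTP request tests in plain JavaScript (Node 18+ for the built-in `fetch`). The `https://api.example.com/pets` endpoint and the response fields are hypothetical, and in Stoplight you would express the same checks as scenario step assertions:
```javascript
// Minimal HTTP request tests; run against a test server, never production data.
const assert = require('node:assert');

async function run() {
  // GET: a valid request should return 200 (OK)
  const list = await fetch('https://api.example.com/pets');
  assert.strictEqual(list.status, 200);

  // POST: creating a resource should return 200 (OK) or 201 (CREATED)
  const created = await fetch('https://api.example.com/pets', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Rex', age: 2 }),
  });
  assert.ok([200, 201].includes(created.status));
  const pet = await created.json();

  // Verify the POST outcome with a follow-up GET
  const fetched = await fetch(`https://api.example.com/pets/${pet.id}`);
  assert.strictEqual(fetched.status, 200);

  // DELETE: removing the resource should return 200 (OK) or 204 (NO CONTENT)
  const removed = await fetch(`https://api.example.com/pets/${pet.id}`, { method: 'DELETE' });
  assert.ok([200, 204].includes(removed.status));

  // DELETE against a non-existent resource should return a 4xx error
  const missing = await fetch('https://api.example.com/pets/does-not-exist', { method: 'DELETE' });
  assert.ok(missing.status >= 400 && missing.status < 500);
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});
```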
<!-- theme: info -->
> Testing is a critical stage of the API development life cycle, and the type of tests executed will depend on the complexity of the API, time, budget, and other factors. It is vital to conduct robust tests that reveal any inconsistencies or defects in the API before it is shipped to a production server or interfaced with other platforms.

View File

@@ -1,101 +1 @@
# Using Context Variables
<!--(FIXME - SHOW WRITING VARIABLE TO CONTEXT IN STEP)-->
Context variables allow you to dynamically store and share data between steps in a scenario. Unlike environment variables, context variables are _not_ saved once a test has completed, so they are only suitable for temporary data.
Context variables are scoped to the scenario, _not_ the collection. This means that two scenarios can both read/write the same context variable `myVar` without conflicting with each other. Environment variables, on the other hand, are shared amongst all scenarios and are scoped to the collection.
At the start of a test run, the context object is empty. Good examples of data to store in a context are things like IDs, usernames, and randomly generated tokens.
## Use Case
Context variables make it possible to chain related steps together. For example, say we have the following set of actions to perform:
1. Create User, `POST /users`. Returns a new user object, with an `id` property.
2. Get User, `GET /users/{$.ctx.userId}`.
3. Delete User, `DELETE /users/{$.ctx.userId}`.
We need to use the `id` property of the user created in step #1 to build the requests in steps #2 and #3. This is a great case for context variables, since the data is temporary (the new user's id changes every test run, and is only needed in this single scenario).
To accomplish this, we would capture/set the `$.ctx.userId` property to `output.id` in step #1, and then use that variable to build the request URLs in steps #2 and #3 (shown above).
## Setting Context Variables
### With Captures
<!--(FIXME - SHOW USING THE CAPTURE MENU IN A SCENARIO STEP)-->
The capture UI in the step editor makes it easy to set `$.ctx` values. You can use values from the step output or input, including headers, response bodies, etc.
<!-- theme: info -->
> Multiple captures can be applied to the same step, to set multiple `$.ctx` values.
### With Scripting
<!--(FIXME - SHOW SCREENSHOT OF SCRIPT IN STEP)-->
Scripting allows you to use more complicated logic in a scenario step. Scripts
are executed either before or after a step request finishes. Scripts are plain
JavaScript and give you direct access to the scenario context through the global
`$.ctx` object.
For example, if we wanted to set the `userId` property as described in the use case above, we would add an after script to the first step with the code:
```javascript
// store the step output body's 'id' property in the context, for use in subsequent steps
$.ctx.set('userId', output.body.get('id'));
```
Where the `$.ctx.set(x, y)` function adds the data referenced in the second
argument (`y`) to the context under the string value of the first argument
(`x`).
Here is another example that just sets `myVariable` to the hardcoded value `123`:
```javascript
$.ctx.set('myVariable', 123);
```
## Using Context Variables
<!--(FIXME - SHOW USING A CONTEXT VARIABLE IN A SCENARIO STEP)-->
To use a context variable in a scenario, use the following syntax:
```
{$.ctx.myVariable}
```
Where:
* `{...}` - Braces signify that this is a variable.
* `$` - The "single dollar sign" syntax is a reference to the current scenario's
runtime scope. Again, context variables are scoped to the individual scenario, not the global collection!
* `ctx` - This is the actual context object on which values are stored and from which they are retrieved.
* `myVariable` - This is the name of the variable being referenced within the context.
When the scenario or step is run, all context variables will
automatically be populated based on the contents of the `$.ctx` at
runtime.
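For example, if an earlier step captured `userId` as `123` (a made-up value), a step request defined with the template below would be sent as `GET /users/123` at run time:
```
GET /users/{$.ctx.userId}
```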
### In Scripts
Similar to the example above, when referencing a context variable in a step
script, use the following syntax:
```javascript
$.ctx.get('myVariable');
```
Note that the braces (`{}`) are absent; here we use the `get()` method to
retrieve the context variable stored under the `myVariable` key.
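As a combined (hypothetical) sketch, an after script on the "Get User" step from the use case above could read the captured id back out of the context and fail the step if the response does not match. Only the documented `$.ctx.get()` and `output.body.get()` calls plus plain JavaScript are used here:
```javascript
// read the id captured in the "Create User" step
const expectedId = $.ctx.get('userId');

// compare it to the id returned by this step's response body
if (output.body.get('id') !== expectedId) {
  throw new Error('Response id does not match the captured userId');
}
```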
***
**Related**
* [Environment Overview](../editor/environments.md)
* [Environment Configuration](../editor/editor-configuration.md)
* [Variables Overview](./variables-overview.md)
* [Context Variables](./variables-context.md)

View File

@@ -1,93 +1 @@
# Using Environment Variables
<!--(FIXME - SHOW CLICKING THROUGH ENVIRONMENTS IN UI)-->
> If you have not already done so, we recommend reviewing the
[Environments](../editor/environments.md) article before continuing.
Environment variables in Stoplight allow you to dynamically retrieve information
in a scenario from the active environment. This makes it possible to
switch between different environments with ease, having variables automatically
populate based on the current environment.
## Setting Environment Variables
### With the Editor Configuration
For information on managing project environments, please review the [environment](../editor/environments.md) article.
### With Captures
Captures make it easy to "capture" values from your step request or result, and save them back to an environment variable for later use. Simply switch to the `captures` tab in the scenario step, and choose `$$.env` as the target property.
Say you have a scenario step that sends an HTTP request to authenticate a new user, and the response from that request includes an `apiKey` that you want to use for other requests. You can save that `apiKey` to an environment variable for later reuse by adding a capture in the form `$$.env.apiKey = output.body.apiKey`. After running the step, check your current environment variables and note the newly added `apiKey`!
> Environment variables set via captures are only added to the user's private
variables, and are not sent to Stoplight. See the [Environment
section](../editor/environments.md) for more information.
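Continuing the example above, a later step could then reference the captured key wherever variables are allowed, for instance in an Authorization header (the header format here is just an illustration):
```
Authorization: Bearer {$$.env.apiKey}
```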
### With Scripting
Scripting allows you to use more complicated logic in a scenario step. Scripts
are executed either before or after a step finishes. Scripts are plain
JavaScript and give you direct access to the scenario environment through a
global `$$.env` object.
To add variables to the environment, use the following syntax:
```javascript
// store the step output (response) body's 'username' property in the environment
$$.env.set('username', output.body.get('username'));
```
Where the `$$.env.set(x, y)` function adds the data referenced in the second
argument (`y`) to the environment under the string value of the first argument
(`x`).
> Environment variables set via script are only added to the user's private
variables, and are not sent to Stoplight. See the [Environment
section](../editor/environments.md) for more information.
## Using Environment Variables
<!--(FIXME - SHOW USING A VARIABLE IN A SCENARIO STEP)-->
Use an environment variable in a scenario with the following syntax:
```
{$$.env.myVariable}
```
Where:
* `{...}` - Braces signify that this is a variable.
* `$$` - The "double dollar sign" syntax is a reference to the global
scope.
* `env` - The `env` property holds the active environment's data.
* `myVariable` - This is the variable being referenced, which comes from the
active environment's resolved variables. Substitute your own variable name when using
this in your scenarios.
When the scenario or step is run, any environment variables will
automatically be populated based on the editor's active environment.
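For example, assuming the active environment defines a `host` variable (a hypothetical name), a request URL template like the one below would resolve to that environment's host when the step runs:
```
GET {$$.env.host}/users
```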
### In Scripts
Similar to the example above, when referencing an environment variable in a step
script, use the following syntax:
```javascript
$$.env.get('myVariable');
```
Note that the braces (`{}`) are absent; here we use the `get()` method to
retrieve the environment variable stored under the `myVariable` key.
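For instance, a before script could check that a required environment variable is present before the request is sent. This sketch uses only the documented `$$.env.get()` call and plain JavaScript, and the `apiKey` name is just an example:
```javascript
// fail fast if the active environment does not provide an apiKey
const apiKey = $$.env.get('apiKey');
if (!apiKey) {
  throw new Error('apiKey is not defined in the active environment');
}
```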
***
**Related**
* [Environment Overview](../editor/environments.md)
* [Environment Configuration](../editor/editor-configuration.md)
* [Variables Overview](./variables-overview.md)
* [Context Variables](./variables-context.md)

View File

@@ -1,30 +1 @@
# Variables Overview
Variables in Stoplight provide a powerful and intuitive way to dynamically set,
update, and retrieve information at any step in a Scenario.
Variables are stored in an [environment](../editor/environments.md). You can define one or more environments,
each with their own variables. This makes it easy to quickly swap out sets of
variables during testing.
There are a variety of circumstances where you might consider using a variable instead of hardcoding a value, for example:
- __hostnames__: Instead of hard-coding a particular server location, use a variable
so that the host can quickly be changed to test multiple server locations (development versus production, for example).
- __api keys__
- __usernames and passwords__
- __ports__
- __path parameters__: Instead of defining a request `GET /users/123`, you can define a request `GET /users/{$.ctx.userId}`.
There are two scopes for variables, which affect how and when they can be used (a short sketch contrasting them follows the list below).
* __Environment Variables__ - Environment variables are scoped to the project, and are shared amongst all steps in your test run. They are persisted between test runs, and are great for data that does not change often (hostnames, ports, etc). See [here](./variables-environment.md) for more information on how to use environment variables.
* __Context Variables__ - Context variables are scoped to the scenario, and are reset on every test run. They are useful to persist test and application state between scenario steps. Context variables are great for temporary information that is only relevant to the current test run. For example, you might store a newly created `userId` returned in your first step, to be used in the second step. This `userId` changes on every test run, which makes it a good context variable candidate. See [here](./variables-context.md) for more information on how to use context variables.
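The difference between the two scopes, as a minimal sketch using the `$$.env` and `$.ctx` APIs described in the articles above (the values shown are hypothetical):
```javascript
// Environment variable: persists between test runs and is shared by every scenario in the collection
$$.env.set('host', 'https://api.example.com');

// Context variable: reset on every run and visible only to the current scenario
$.ctx.set('userId', output.body.get('id'));
```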
***
**Related**
* [Environment Variables](./variables-environment.md)
* [Context Variables](./variables-context.md)

Binary image files changed (not shown in this diff): 7 added (20-31 MiB each) and 6 removed (21-130 KiB each).