Presentation API demos

In the spirit of experimentation, the Second Screen Presentation Community Group has been working on a series of proof-of-concept demos for the Presentation API, using custom browser builds and/or existing plug-ins to implement or emulate the Presentation API, when available, or falling back to opening content in a separate browser window otherwise.

Except where otherwise noted, the source code of these demos is available on GitHub under the  Second Screen Presentation Community Group organization .

Video sharing demo

The  video sharing demo  lets one present a video on a second screen.

Note that the demo does not present the video directly on the second screen. Rather, it presents an HTML video player and then passes that player the URL of the video to play. In particular, the video player is controlled through messages exchanged between the controlling and presenting sides.
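
The exact messages the demo exchanges are not documented here; a minimal sketch of this kind of message-based control, assuming an established PresentationConnection named connection and an entirely illustrative JSON message format, might look like:

    // controller.html: tell the presented player which video to load and start playback.
    // The { command, url } message shape is illustrative, not the demo's actual protocol.
    connection.send(JSON.stringify({ command: "load", url: "https://example.org/movie.mp4" }));
    connection.send(JSON.stringify({ command: "play" }));

    // player page on the second screen: react to commands from the controller.
    connection.onmessage = event => {
      const { command, url } = JSON.parse(event.data);
      const video = document.querySelector("video");
      if (command === "load") video.src = url;
      if (command === "play") video.play();
    };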

The demo supports second screens attached through a video link or through some wireless equivalent, provided that the supplied custom build of Chromium is used.

<video> sharing demo

Unlike the first demo, the <video> sharing demo presents the video directly on the second screen. The presented video is controlled from the controlling side using the usual HTMLMediaElement methods such as play(), pause(), or fastSeek().

The demo supports second screens attached through a video link or through some wireless equivalent, provided that the supplied custom build of Chromium is used. The demo also supports Chromecast devices, provided that the Google Cast extension is available.

HTML Slidy remote

The HTML Slidy remote demo  takes the URL of a slide show made with HTML Slidy as input and presents that slide show on a second screen, turning the first screen into a slide show remote.

FAMIUM Presentation API demos

The Competence Center Future Applications and Media (FAME) at Fraunhofer FOKUS offers several implementations of the Presentation API as part of FAMIUM, an end-to-end prototype implementation for early technology evaluation and interoperability testing developed by FAME.

The implementations support virtual displays that can be opened in any Web browser, support Chromecast devices, and can turn Android and desktop devices into second screens; they also include a prototype Web browser that implements the Presentation API and supports additional protocols such as WiDi, Miracast, and Network Service Discovery (mDNS/DNS-SD). The source code of these implementations is not yet available.

Presentation API

W3C Editor's Draft 23 August 2024

Copyright © 2024 World Wide Web Consortium. W3C® liability, trademark and permissive document license rules apply.

This specification defines an API to enable Web content to access presentation displays and use them for presenting Web content.

Status of This Document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document was published by the Second Screen Working Group as an Editor's Draft.

Since publication as Candidate Recommendation on 01 June 2017 , the Working Group updated the steps to construct a PresentationRequest to ignore a URL with an unsupported scheme, placed further restrictions on how receiving browsing contexts are allowed to navigate themselves, and dropped the definition of the BinaryType enum in favor of the one defined in the HTML specification. Other interfaces defined in this document did not change other than to adjust to WebIDL updates. Various clarifications and editorial updates were also made. See the list of changes for details.

No feature has been identified as being at risk .

The Second Screen Working Group will refine the test suite for the Presentation API during the Candidate Recommendation period and update the preliminary implementation report . For this specification to advance to Proposed Recommendation, two independent, interoperable implementations of each feature must be demonstrated, as detailed in the Candidate Recommendation exit criteria section.

Publication as an Editor's Draft does not imply endorsement by W3C and its Members.

This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .

This document is governed by the 03 November 2023 W3C Process Document .

1. Introduction

This section is non-normative.

The Presentation API aims to make presentation displays such as projectors, attached monitors, and network-connected TVs available to the Web. It takes into account displays that are attached using wired (HDMI, DVI, or similar) and wireless technologies (Miracast, Chromecast, DLNA, AirPlay, or similar).

Devices with limited screen size lack the ability to show Web content to a larger audience: a group of colleagues in a conference room, or friends and family at home, for example. Web content shown on a larger presentation display has greater perceived quality, legibility, and impact.

At its core, the Presentation API enables a controller page to show a presentation page on a presentation display and exchange messages with it. How the presentation page is transmitted to the display and how messages are exchanged between it and the controller page are left to the implementation; this allows the use of a wide variety of display technologies.

For example, if the presentation display is connected by HDMI or Miracast, which only allow audio and video to be transmitted, the user agent (UA) hosting the controller will also render the presentation . It then uses the operating system to send the resulting graphical and audio output to the presentation display. We refer to this situation as the 1-UA mode implementation of the Presentation API. The only requirements are that the user agent is able to send graphics and audio from rendering the presentation to the presentation display, and exchange messages internally between the controller and presentation pages.

If the presentation display is able to render HTML natively and communicate with the controller via a network, the user agent hosting the controller does not need to render the presentation . Instead, the user agent acts as a proxy that requests the presentation display to load and render the presentation page itself. Message exchange is done over a network connection between the user agent and the presentation display. We refer to this situation as the 2-UA mode implementation of the Presentation API.

The Presentation API is intended to be used with user agents that attach to presentation displays in 1-UA mode , 2-UA mode , and possibly other means not listed above. To improve interoperability between user agents and presentation displays, standardization of network communication between browsers and displays is being considered in the Second Screen Community Group .

2. Use cases and requirements

Use cases and requirements are captured in a separate Presentation API Use Cases and Requirements document.

3. Conformance

As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.

The key words MAY , MUST , MUST NOT , OPTIONAL , SHOULD , and SHOULD NOT in this document are to be interpreted as described in BCP 14 [ RFC2119 ] [ RFC8174 ] when, and only when, they appear in all capitals, as shown here.

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and terminate these steps") are to be interpreted with the meaning of the key word (" MUST ", " SHOULD ", " MAY ", etc.) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps may be implemented in any manner, so long as the result is equivalent. (In particular, the algorithms defined in this specification are intended to be easy to follow, and not intended to be performant.)

3.1 Conformance classes

This specification describes the conformance criteria for two classes of user agents .

Web browsers that conform to the specifications of a controlling user agent must be able to start and control presentations by providing a controlling browsing context as described in this specification. This context implements the Presentation , PresentationAvailability , PresentationConnection , PresentationConnectionAvailableEvent , PresentationConnectionCloseEvent , and PresentationRequest interfaces.

Web browsers that conform to the specifications of a receiving user agent must be able to render presentations by providing a receiving browsing context as described in this specification. This context implements the Presentation , PresentationConnection , PresentationConnectionAvailableEvent , PresentationConnectionCloseEvent , PresentationConnectionList , and PresentationReceiver interfaces.

One user agent may act both as a controlling user agent and as a receiving user agent , if it provides both browsing contexts and implements all of their required interfaces. This can happen when the same user agent is able to host the controlling browsing context and the receiving browsing context for a presentation, as in the 1-UA mode implementation of the API.

Conformance requirements phrased against a user agent apply to a controlling user agent , a receiving user agent , or to both classes, depending on the context.

4. Terminology

The terms JavaScript realm and current realm are used as defined in [ ECMASCRIPT ]. The terms resolved and rejected in the context of Promise objects are used as defined in [ ECMASCRIPT ].

The terms Accept-Language and HTTP authentication are used as defined in [ RFC9110 ].

The term cookie store is used as defined in [ RFC6265 ].

The term UUID is used as defined in [ RFC4122 ].

The term DIAL is used as defined in [ DIAL ].

The term reload a document refers to steps run when the reload () method gets called in [ HTML ].

The term local storage area refers to the storage areas exposed by the localStorage attribute, and the term session storage area refers to the storage areas exposed by the sessionStorage attribute in [ HTML ].

This specification references terms exported by other specifications; see B.2 Terms defined by reference . It also references the following internal concepts from other specifications:

  • parse a url , defined in HTML [ HTML ]
  • creating a new browsing context , defined in HTML [ HTML ]
  • session history , defined in HTML [ HTML ]
  • allowed to navigate , defined in HTML [ HTML ]
  • navigating to a fragment identifier , defined in HTML [ HTML ]
  • unload a document , defined in HTML [ HTML ]
  • database , defined in Indexed Database API [ INDEXEDDB ]

5. Examples

This section shows code examples that highlight the use of the main features of the Presentation API. In these examples, controller.html implements the controller and presentation.html implements the presentation. Both pages are served from the domain https://example.org ( https://example.org/controller.html and https://example.org/presentation.html ). These examples assume that the controlling page is managing one presentation at a time. Please refer to the comments in the code examples for further details.

5.1 Monitoring availability of presentation displays

This code renders a button that is visible when there is at least one compatible presentation display that can present https://example.com/presentation.html or https://example.net/alternate.html .

Monitoring of display availability is done by first creating a PresentationRequest with the URLs you want to present, then calling getAvailability to obtain a PresentationAvailability object whose change event will fire when presentation availability changes state.
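
The example code itself is not reproduced here; a minimal sketch of such code, assuming an illustrative button element with id presentBtn, might be:

    // controller.html
    const presentBtn = document.getElementById("presentBtn"); // illustrative button
    presentBtn.style.display = "none";

    const request = new PresentationRequest([
      "https://example.com/presentation.html",
      "https://example.net/alternate.html"
    ]);

    request.getAvailability().then(availability => {
      // availability.value is true while at least one compatible display is available.
      const update = () => {
        presentBtn.style.display = availability.value ? "inline" : "none";
      };
      update();
      availability.onchange = update;
    }).catch(() => {
      // The user agent cannot monitor availability in the background (see 6.4.2);
      // keep the button visible and let start() perform discovery instead.
      presentBtn.style.display = "inline";
    });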

5.2 Starting a new presentation

When the user clicks presentBtn , this code requests presentation of one of the URLs in the PresentationRequest . When start is called, the browser typically shows a dialog that allows the user to select one of the compatible displays that are available. The first URL in the PresentationRequest that is compatible with the chosen display will be presented on that display.

The start method resolves with a PresentationConnection object that is used to track the state of the presentation, and exchange messages with the presentation page once it's loaded on the display.
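
A minimal sketch, continuing the previous example and reusing its request and presentBtn:

    // controller.html
    let connection = null;

    presentBtn.onclick = () => {
      // start() must be called from a user gesture (transient activation);
      // the browser shows its display picker when it is invoked.
      request.start().then(newConnection => {
        connection = newConnection;
        // connection.id can be stored to reconnect later (see 5.3).
        localStorage.setItem("presentationId", connection.id);
      }).catch(err => {
        // The user dismissed the picker, or no compatible display was found.
        console.error(err);
      });
    };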

5.3 Reconnecting to an existing presentation

The presentation continues to run even after the original page that started the presentation closes its PresentationConnection , navigates, or is closed. Another page can use the id on the PresentationConnection to reconnect to an existing presentation and resume control of it. This is only guaranteed to work from the same browser that started the presentation.
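
A minimal sketch, assuming the presentation identifier was saved under an illustrative localStorage key when the presentation was started:

    // controller.html
    const savedId = localStorage.getItem("presentationId"); // illustrative key
    if (savedId) {
      request.reconnect(savedId).then(existingConnection => {
        connection = existingConnection; // control of the running presentation is resumed
      }).catch(() => {
        // No matching presentation is running; a new one can be started on user gesture.
      });
    }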

5.4 Starting a presentation by the controlling user agent

Some browsers have a way for users to start a presentation without interacting directly with the controlling page. Controlling pages can opt into this behavior by setting the defaultRequest property on navigator.presentation , and listening for a connectionavailable event that is fired when a presentation is started this way. The PresentationConnection passed with the event behaves the same as if the page had called start .
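
A minimal sketch of opting into chrome-initiated presentations, reusing the request from 5.1:

    // controller.html
    navigator.presentation.defaultRequest = request;
    navigator.presentation.defaultRequest.onconnectionavailable = event => {
      // Fired when the user starts a presentation from the browser chrome;
      // event.connection behaves exactly like the result of request.start().
      connection = event.connection;
    };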

5.5 Monitoring the connection state and exchanging data

Once a presentation has started, the returned PresentationConnection is used to monitor its state and exchange messages with it. Typically the user will be given the choice to disconnect from or terminate the presentation from the controlling page.

Since the controlling page may connect to and disconnect from multiple presentations during its lifetime, it's helpful to keep track of the current PresentationConnection and its state. Messages can only be sent and received on connections in a connected state.
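
A minimal sketch of tracking the current connection and exchanging messages, with illustrative message contents:

    // controller.html
    function setConnection(newConnection) {
      connection = newConnection;

      connection.onconnect = () => {
        // Messages can only be sent while the state is "connected".
        connection.send("hello from the controller");
      };

      connection.onmessage = event => {
        console.log("Received: " + event.data);
      };

      connection.onclose = () => {
        // The connection closed, but the presentation keeps running;
        // reconnect() can be used later to resume control.
        connection = null;
      };

      connection.onterminate = () => {
        // The presentation itself ended; it cannot be reconnected to.
        connection = null;
      };
    }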

5.6 Listening for incoming presentation connections

This code runs on the presented page ( https://example.org/presentation.html ). Presentations may be connected to from multiple controlling pages, so it's important that the presented page listen for incoming connections on the connectionList object.
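
A minimal sketch of the receiving side:

    // presentation.html
    function addConnection(connection) {
      connection.onmessage = event => {
        // Echo each message back to the controller that sent it (illustrative behavior).
        connection.send("Received: " + event.data);
      };
    }

    navigator.presentation.receiver.connectionList.then(list => {
      // Handle controllers that connected before this code ran...
      list.connections.forEach(addConnection);
      // ...and any controller that connects later.
      list.onconnectionavailable = event => addConnection(event.connection);
    });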

5.7 Passing locale information with a message
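
The original example code for this section is not reproduced here. A minimal sketch, assuming the controller simply forwards its preferred language so the presentation can localize its content (the message format is illustrative):

    // controller.html: send the controller's locale once the connection is established.
    connection.onconnect = () => {
      connection.send(JSON.stringify({ type: "locale", lang: navigator.language }));
    };

    // presentation.html: apply the received locale to the presented document.
    connection.onmessage = event => {
      const message = JSON.parse(event.data);
      if (message.type === "locale") {
        document.documentElement.lang = message.lang;
      }
    };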

5.8 Creating a second presentation from the same controlling page

It's possible for a controlling page to start and control two independent presentations on two different presentation displays. This code shows how a second presentation can be added to the first one in the examples above.
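
A minimal sketch, assuming an illustrative second button and second presentation URL:

    // controller.html
    const secondBtn = document.getElementById("secondBtn"); // illustrative button
    const secondRequest = new PresentationRequest("https://example.org/second.html");
    let secondConnection = null;

    secondBtn.onclick = () => {
      // The display picker is shown again; the user may choose a different display.
      secondRequest.start().then(newConnection => {
        secondConnection = newConnection;
      });
    };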

6.1 Common idioms

A presentation display refers to a graphical and/or audio output device available to the user agent via an implementation specific connection technology.

A presentation connection is an object relating a controlling browsing context to its receiving browsing context and enables two-way messaging between them. Each presentation connection has a presentation connection state , a unique presentation identifier to distinguish it from other presentations , and a presentation URL that is a URL used to create or reconnect to the presentation . A valid presentation identifier consists of alphanumeric ASCII characters only and is at least 16 characters long.

Some presentation displays may only be able to display a subset of Web content because of functional, security or hardware limitations. Examples are set-top boxes, smart TVs, or networked speakers capable of rendering only audio. We say that such a display is an available presentation display for a presentation URL if the controlling user agent can reasonably guarantee that presentation of the URL on that display will succeed.

A controlling browsing context (or controller for short) is a browsing context that has connected to a presentation by calling start or reconnect , or received a presentation connection via a connectionavailable event. In algorithms for PresentationRequest , the controlling browsing context is the browsing context whose JavaScript realm was used to construct the PresentationRequest .

The receiving browsing context (or presentation for short) is the browsing context responsible for rendering to a presentation display . A receiving browsing context can reside in the same user agent as the controlling browsing context or a different one. A receiving browsing context is created by following the steps to create a receiving browsing context .

In a procedure, the destination browsing context is the receiving browsing context when the procedure is initiated at the controlling browsing context , or the controlling browsing context if it is initiated at the receiving browsing context .

The set of controlled presentations , initially empty, contains the presentation connections created by the controlling browsing contexts for the controlling user agent (or a specific user profile within that user agent). The set of controlled presentations is represented by a list of PresentationConnection objects that represent the underlying presentation connections . Several PresentationConnection objects may share the same presentation URL and presentation identifier in that set, but there can be only one PresentationConnection with a specific presentation URL and presentation identifier for a given controlling browsing context .

The set of presentation controllers , initially empty, contains the presentation connections created by a receiving browsing context for the receiving user agent . The set of presentation controllers is represented by a list of PresentationConnection objects that represent the underlying presentation connections . All presentation connections in this set share the same presentation URL and presentation identifier .

In a receiving browsing context , the presentation controllers monitor , initially set to null , exposes the current set of presentation controllers to the receiving application. The presentation controllers monitor is represented by a PresentationConnectionList .

In a receiving browsing context , the presentation controllers promise , which is initially set to null , provides the presentation controllers monitor once the initial presentation connection is established. The presentation controllers promise is represented by a Promise that resolves with the presentation controllers monitor .

In a controlling browsing context , the default presentation request , which is initially set to null , represents the request to use when the user wishes to initiate a presentation connection from the browser chrome.

The task source for the tasks mentioned in this specification is the presentation task source .

Unless otherwise specified, the JavaScript realm for script objects constructed by algorithm steps is the current realm .

6.2 Interface Presentation

The presentation attribute is used to retrieve an instance of the Presentation interface. It MUST return the Presentation instance.

6.2.1 Controlling user agent

Controlling user agents MUST implement the following partial interface:

The defaultRequest attribute MUST return the default presentation request if any, null otherwise. On setting, the default presentation request MUST be set to the new value.

The controlling user agent SHOULD initiate presentation using the default presentation request only when the user has expressed an intention to do so via a user gesture, for example by clicking a button in the browser chrome.

To initiate presentation using the default presentation request , the controlling user agent MUST follow the steps to start a presentation from a default presentation request .

Support for initiating a presentation using the default presentation request is OPTIONAL .

6.2.2 Receiving user agent

Receiving user agents MUST implement the following partial interface:

The receiver attribute MUST return the PresentationReceiver instance associated with the receiving browsing context and created by the receiving user agent when the receiving browsing context is created . In any other browsing context (including child navigables of the receiving browsing context ) it MUST return null .

Web developers can use navigator.presentation.receiver to detect when a document is loaded as a presentation.
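
A minimal sketch of such feature detection (illustrative):

    // Any page can check at load time whether it was loaded as a presentation.
    if (navigator.presentation && navigator.presentation.receiver) {
      // Receiving browsing context: wait for controllers to connect.
      navigator.presentation.receiver.connectionList.then(list => {
        list.onconnectionavailable = event => { /* handle event.connection */ };
      });
    } else {
      // Ordinary page: it may act as a controller instead.
    }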

6.3 Interface PresentationRequest

A PresentationRequest object is associated with a request to initiate or reconnect to a presentation made by a controlling browsing context . The PresentationRequest object MUST be implemented in a controlling browsing context provided by a controlling user agent .

When a PresentationRequest is constructed, the given urls MUST be used as the list of presentation request URLs which are each a possible presentation URL for the PresentationRequest instance.

6.3.1 Constructing a PresentationRequest

When the PresentationRequest constructor is called, the controlling user agent MUST run these steps:

  • If the document object's active sandboxing flag set has the sandboxed presentation browsing context flag set, then throw a SecurityError and abort these steps.
  • If urls is an empty sequence, then throw a NotSupportedError and abort all remaining steps.
  • If a single url was provided, let urls be a one item array containing url .
  • Let presentationUrls be an empty list of URLs.
  • For each URL U in urls , run the following substeps:
      • Let A be an absolute URL that is the result of parsing U relative to the API base URL specified by the current settings object .
      • If the parse a URL algorithm failed, then throw a SyntaxError exception and abort all remaining steps.
      • If A 's scheme is supported by the controlling user agent , add A to presentationUrls .
  • If presentationUrls is an empty list, then throw a NotSupportedError and abort all remaining steps.
  • If any member of presentationUrls is not a potentially trustworthy URL , then throw a SecurityError and abort these steps.
  • Construct a new PresentationRequest object with presentationUrls as its presentation request URLs and return it.

6.3.2 Selecting a presentation display

When the start method is called, the user agent MUST run the following steps to select a presentation display .

  • If the document's active window does not have transient activation , return a Promise rejected with an InvalidAccessError exception and abort these steps.
  • Let topContext be the top-level browsing context of the controlling browsing context .
  • If there is already an unsettled Promise from a previous call to start in topContext or any browsing context in the descendant navigables of topContext , return a new Promise rejected with an OperationError exception and abort all remaining steps.
  • Let P be a new Promise .
  • Return P , but continue running these steps in parallel .
  • If the user agent is not monitoring the list of available presentation displays , run the steps to monitor the list of available presentation displays in parallel .
  • Let presentationUrls be the presentation request URLs of presentationRequest .
  • Request user permission for the use of a presentation display and selection of one presentation display.
  • If either of the following is true:
      • The list of available presentation displays is empty and will remain so before the request for user permission is completed.
      • No member in the list of available presentation displays is an available presentation display for any member of presentationUrls .
    Then run the following substeps:
      • Reject P with a NotFoundError exception.
      • Abort all remaining steps.
  • If the user denies permission to use a display, reject P with a NotAllowedError exception, and abort all remaining steps.
  • Otherwise, the user grants permission to use a display; let D be that display.
  • Run the steps to start a presentation connection with presentationRequest , D , and P .

6.3.3 Starting a presentation from a default presentation request

When the user expresses an intent to start presentation of a document on a presentation display D using the browser chrome (via a dedicated button, user gesture, or other signal), that user agent MUST run the following steps to start a presentation from a default presentation request . If no default presentation request is set on the document, these steps MUST NOT be run.

  • If there is no presentation request URL for presentationRequest for which D is an available presentation display , then abort these steps.
  • Run the steps to start a presentation connection with presentationRequest and D .

6.3.4 Starting a presentation connection

When the user agent is to start a presentation connection , it MUST run the following steps:

  • Let I be a new valid presentation identifier unique among all presentation identifiers for known presentation connections in the set of controlled presentations . To avoid fingerprinting, implementations SHOULD set the presentation identifier to a UUID generated by following forms 4.4 or 4.5 of [ rfc4122 ].
  • Create a new PresentationConnection S .
  • Set the presentation identifier of S to I .
  • Set the presentation URL for S to the first presentationUrl in presentationUrls for which there exists an entry (presentationUrl, D) in the list of available presentation displays .
  • Set the presentation connection state of S to connecting .
  • Add S to the set of controlled presentations .
  • If P is provided, resolve P with S .
  • Queue a task to fire an event named connectionavailable , that uses the PresentationConnectionAvailableEvent interface, with the connection attribute initialized to S , at presentationRequest . The event must not bubble and must not be cancelable.
  • Let U be the user agent connected to D.
  • If the next step fails, abort all remaining steps and close the presentation connection S with error as closeReason , and a human readable message describing the failure as closeMessage .
  • Using an implementation specific mechanism, tell U to create a receiving browsing context with D , presentationUrl , and I as parameters.
  • Establish a presentation connection with S .

6.3.5 Reconnecting to a presentation

When the reconnect method is called, the user agent MUST run the following steps to reconnect to a presentation:

  • Let P be a new Promise .
  • Return P , but continue running these steps in parallel.
  • Search the set of controlled presentations for a PresentationConnection that meets all of the following criteria:
      • Its controlling browsing context is the current browsing context
      • Its presentation connection state is not terminated
      • Its presentation URL is equal to one of the presentation request URLs of presentationRequest
      • Its presentation identifier is equal to presentationId
  • If such a PresentationConnection exists, run the following substeps:
      • Let existingConnection be that PresentationConnection .
      • Resolve P with existingConnection .
      • If the presentation connection state of existingConnection is connecting or connected , then abort all remaining steps.
      • Set the presentation connection state of existingConnection to connecting .
      • Establish a presentation connection with existingConnection .
      • Abort all remaining steps.
  • Otherwise, search the set of controlled presentations for a PresentationConnection that meets the same criteria, except that:
      • Its controlling browsing context is not the current browsing context
  • If such a PresentationConnection exists, let existingConnection be that PresentationConnection and run the following substeps:
      • Create a new PresentationConnection newConnection .
      • Set the presentation identifier of newConnection to presentationId .
      • Set the presentation URL of newConnection to the presentation URL of existingConnection .
      • Set the presentation connection state of newConnection to connecting .
      • Add newConnection to the set of controlled presentations .
      • Resolve P with newConnection .
      • Queue a task to fire an event named connectionavailable , that uses the PresentationConnectionAvailableEvent interface, with the connection attribute initialized to newConnection , at presentationRequest . The event must not bubble and must not be cancelable.
      • Establish a presentation connection with newConnection .
      • Abort all remaining steps.
  • Reject P with a NotFoundError exception.

6.3.6 Event Handlers

The following are the event handlers (and their corresponding event handler event types) that must be supported, as event handler IDL attributes, by objects implementing the PresentationRequest interface:

Event handler              Event handler event type
onconnectionavailable      connectionavailable

6.4 Interface PresentationAvailability

A PresentationAvailability object exposes the presentation display availability for a presentation request. The presentation display availability for a PresentationRequest stores whether there is currently any available presentation display for at least one of the presentation request URLs of the request.

The presentation display availability for a presentation request is eligible for garbage collection when no ECMAScript code can observe the PresentationAvailability object.

If the controlling user agent can monitor the list of available presentation displays in the background (without a pending request to start ), the PresentationAvailability object MUST be implemented in a controlling browsing context .

The value attribute MUST return the last value it was set to. The value is initialized and updated by the monitor the list of available presentation displays algorithm.

The onchange attribute is an event handler whose corresponding event handler event type is change .

6.4.1 The set of presentation availability objects

The user agent MUST keep track of the set of presentation availability objects created by the getAvailability method. The set of presentation availability objects is represented as a set of tuples ( A , availabilityUrls ) , initially empty, where:

  • A is a live PresentationAvailability object.
  • availabilityUrls is the list of presentation request URLs for the PresentationRequest when getAvailability was called on it to create A .

6.4.2 The list of available presentation displays

The user agent MUST keep a list of available presentation displays . The list of available presentation displays is represented by a list of tuples (availabilityUrl, display) . An entry in this list means that display is currently an available presentation display for availabilityUrl . This list of presentation displays may be used for starting new presentations, and is populated based on an implementation specific discovery mechanism. It is set to the most recent result of the algorithm to monitor the list of available presentation displays .

While the set of presentation availability objects is not empty, the user agent MAY monitor the list of available presentation displays continuously, so that pages can use the value property of a PresentationAvailability object to offer presentation only when there are available displays. However, the user agent may not support continuous availability monitoring in the background; for example, because of platform or power consumption restrictions. In this case the Promise returned by getAvailability is rejected , and the algorithm to monitor the list of available presentation displays will only run as part of the select a presentation display algorithm.

When the set of presentation availability objects is empty (that is, there are no availabilityUrls being monitored), user agents SHOULD NOT monitor the list of available presentation displays to satisfy the power saving non-functional requirement . To further save power, the user agent MAY also keep track of whether a page holding a PresentationAvailability object is in the foreground. Using this information, implementation specific discovery of presentation displays can be resumed or suspended.

6.4.3 Getting the presentation displays availability information

When the getAvailability method is called, the user agent MUST run the following steps:

  • If there is an unsettled Promise from a previous call to getAvailability on presentationRequest , return that Promise and abort these steps.
  • Otherwise, let P be a new Promise constructed in the JavaScript realm of presentationRequest .
  • Return P , but continue running these steps in parallel.
  • If the user agent is unable to monitor the list of available presentation displays in the background, run the following substeps:
      • Reject P with a NotSupportedError exception.
      • Abort all the remaining steps.
  • If the presentation display availability for presentationRequest is not null, run the following substeps:
      • Resolve P with the request's presentation display availability .
      • Abort all the remaining steps.
  • Set the presentation display availability for presentationRequest to a newly created PresentationAvailability object constructed in the JavaScript realm of presentationRequest , and let A be that object.
  • Create a tuple ( A , presentationUrls ) and add it to the set of presentation availability objects .
  • Run the algorithm to monitor the list of available presentation displays . Note: the monitoring algorithm must be run at least one more time after the previous step to pick up the tuple that was added to the set of presentation availability objects .
  • Resolve P with A .

6.4.4 Monitoring the list of available presentation displays

If the set of presentation availability objects is non-empty, or there is a pending request to select a presentation display , the user agent MUST monitor the list of available presentation displays by running the following steps:

  • Let availabilitySet be a shallow copy of the set of presentation availability objects .
  • If there is a pending request to select a presentation display for a PresentationRequest , run the following substeps:
      • Let A be a newly created PresentationAvailability object.
      • Create a tuple ( A , presentationUrls ) where presentationUrls is the PresentationRequest 's presentation request URLs and add it to availabilitySet .
  • Let newDisplays be an empty list.
  • If the user agent is unable to retrieve presentation displays (e.g., because the user has disabled this capability), then skip the following step.
  • Retrieve presentation displays (using an implementation specific mechanism) and set newDisplays to this list.
  • Set the list of available presentation displays to the empty list.
  • For each member ( A , availabilityUrls ) of availabilitySet , run the following substeps:
      • Set previousAvailability to the value of A 's value property.
      • Let newAvailability be false .
      • For each availabilityUrl in availabilityUrls and each display in newDisplays , if display is an available presentation display for availabilityUrl , run the following substeps:
          • Insert a tuple (availabilityUrl, display) into the list of available presentation displays , if no identical tuple already exists.
          • Set newAvailability to true .
      • If A 's value property has not yet been initialized, then set A 's value property to newAvailability and skip the following step.
      • If previousAvailability is not equal to newAvailability , queue a task to run the following substeps:
          • Set A 's value property to newAvailability .
          • Fire an event named change at A .

When a presentation display availability object is eligible for garbage collection, the user agent SHOULD run the following steps:

  • Let A be the newly deceased PresentationAvailability object
  • Find and remove any entry ( A , availabilityUrl ) in the set of presentation availability objects .
  • If the set of presentation availability objects is now empty and there is no pending request to select a presentation display , cancel any pending task to monitor the list of available presentation displays for power saving purposes, and set the list of available presentation displays to the empty list.

6.4.5 Interface PresentationConnectionAvailableEvent

A controlling user agent fires an event named connectionavailable on a PresentationRequest when a connection associated with the object is created. It is fired at the PresentationRequest instance, using the PresentationConnectionAvailableEvent interface, with the connection attribute set to the PresentationConnection object that was created. The event is fired for each connection that is created for the controller , either by the controller calling start or reconnect , or by the controlling user agent creating a connection on the controller's behalf via defaultRequest .

A receiving user agent fires an event named connectionavailable on a PresentationReceiver when an incoming connection is created. It is fired at the presentation controllers monitor , using the PresentationConnectionAvailableEvent interface, with the connection attribute set to the PresentationConnection object that was created. The event is fired for all connections that are created when monitoring incoming presentation connections .

The connection attribute MUST return the value it was set to when the PresentationConnection object was created.

When the PresentationConnectionAvailableEvent constructor is called, the user agent MUST construct a new PresentationConnectionAvailableEvent object with its connection attribute set to the connection member of the PresentationConnectionAvailableEventInit object passed to the constructor.

6.5 Interface PresentationConnection

Each presentation connection is represented by a PresentationConnection object. Both the controlling user agent and receiving user agent MUST implement PresentationConnection .

The id attribute specifies the presentation connection 's presentation identifier .

The url attribute specifies the presentation connection 's presentation URL .

The state attribute represents the presentation connection 's current state. It can take one of the values of PresentationConnectionState depending on the connection state:

  • connecting means that the user agent is attempting to establish a presentation connection with the destination browsing context . This is the initial state when a PresentationConnection object is created.
  • connected means that the presentation connection is established and communication is possible.
  • closed means that the presentation connection has been closed, or could not be opened. It may be re-opened through a call to reconnect . No communication is possible.
  • terminated means that the receiving browsing context has been terminated. Any presentation connection to that presentation is also terminated and cannot be re-opened. No communication is possible.

When the close method is called on a PresentationConnection S , the user agent MUST start closing the presentation connection S with closed as closeReason and an empty message as closeMessage .

When the terminate method is called on a PresentationConnection S in a controlling browsing context , the user agent MUST run the algorithm to terminate a presentation in a controlling browsing context using S .

When the terminate method is called on a PresentationConnection S in a receiving browsing context , the user agent MUST run the algorithm to terminate a presentation in a receiving browsing context using S .

The binaryType attribute can take one of the values of BinaryType . When a PresentationConnection object is created, its binaryType attribute MUST be set to the string " arraybuffer ". On getting, it MUST return the last value it was set to. On setting, the user agent MUST set the attribute to the new value.
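
A minimal sketch of how the attribute affects received binary messages (illustrative):

    // Either side of an established connection.
    connection.binaryType = "blob"; // binary messages will arrive as Blob objects

    connection.onmessage = event => {
      if (event.data instanceof Blob) {
        // Binary payload, delivered as a Blob because of the setting above;
        // with the default "arraybuffer" it would be an ArrayBuffer instead.
      } else {
        // Text payloads are always delivered as strings.
      }
    };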

When the send method is called on a PresentationConnection S , the user agent MUST run the algorithm to send a message through S .

When a PresentationConnection object S is discarded (because the document owning it is navigating or is closed) while the presentation connection state of S is connecting or connected , the user agent MUST start closing the presentation connection S with wentaway as closeReason and an empty closeMessage .

If the user agent receives a signal from the destination browsing context that a PresentationConnection S is to be closed, it MUST close the presentation connection S with closed or wentaway as closeReason and an empty closeMessage .

6.5.1 Establishing a presentation connection

When the user agent is to establish a presentation connection using a presentation connection , it MUST run the following steps:

  • If the presentation connection state of presentationConnection is not connecting , then abort all remaining steps.
  • Request connection of presentationConnection to the receiving browsing context . The presentation identifier of presentationConnection MUST be sent with this request.
  • Set the presentation connection state of presentationConnection to connected .
  • Fire an event named connect at presentationConnection .
  • If the connection cannot be completed, close the presentation connection S with error as closeReason , and a human readable message describing the failure as closeMessage .

6.5.2 Sending a message through PresentationConnection

Let presentation message data be the payload data to be transmitted between two browsing contexts. Let presentation message type be the type of that data, one of text or binary .

When the user agent is to send a message through a presentation connection , it MUST run the following steps:

  • If the state property of presentationConnection is not connected , throw an InvalidStateError exception.
  • If the closing procedure of presentationConnection has started, then abort these steps.
  • Let presentation message type messageType be binary if messageOrData is of type ArrayBuffer , ArrayBufferView , or Blob . Let messageType be text if messageOrData is of type DOMString .
  • Using an implementation specific mechanism, transmit the contents of messageOrData as the presentation message data and messageType as the presentation message type to the destination browsing context .
  • If the previous step encounters an unrecoverable error, then abruptly close the presentation connection presentationConnection with error as closeReason , and a closeMessage describing the error encountered.

To assist applications in recovery from an error sending a message through a presentation connection , the user agent should include details of which attempt failed in closeMessage , along with a human readable string explaining the failure reason. Example renditions of closeMessage :

  • Unable to send text message (network_error): "hello" for DOMString messages, where "hello" is the first 256 characters of the failed message.
  • Unable to send binary message (invalid_message) for ArrayBuffer , ArrayBufferView and Blob messages.

6.5.3 Receiving a message through PresentationConnection

When the user agent has received a transmission from the remote side consisting of presentation message data and presentation message type , it MUST run the following steps to receive a message through a PresentationConnection :

  • If the state property of presentationConnection is not connected , abort these steps.
  • Let event be the result of creating an event using the MessageEvent interface, with the event type message , which does not bubble and is not cancelable.
  • If messageType is text , then initialize event 's data attribute to messageData with type DOMString .
  • If messageType is binary , and binaryType attribute is set to " blob ", then initialize event 's data attribute to a new Blob object with messageData as its raw data.
  • If messageType is binary , and binaryType attribute is set to " arraybuffer ", then initialize event 's data attribute to a new ArrayBuffer object whose contents are messageData .
  • Queue a task to fire event at presentationConnection .

If the user agent encounters an unrecoverable error while receiving a message through presentationConnection , it MUST abruptly close the presentation connection presentationConnection with error as closeReason . It SHOULD use a human readable description of the error encountered as closeMessage .

6.5.4 Interface PresentationConnectionCloseEvent

A PresentationConnectionCloseEvent is fired when a presentation connection enters a closed state. The reason attribute provides the reason why the connection was closed. It can take one of the values of PresentationConnectionCloseReason :

  • error means that the mechanism for connecting or communicating with a presentation entered an unrecoverable error.
  • closed means that either the controlling browsing context or the receiving browsing context that were connected by the PresentationConnection called close() .
  • wentaway means that the browser closed the connection, for example, because the browsing context that owned the connection navigated or was discarded.

When the reason attribute is error , the user agent SHOULD set the message attribute to a human readable description of how the communication channel encountered an error.

When the PresentationConnectionCloseEvent constructor is called, the user agent MUST construct a new PresentationConnectionCloseEvent object, with its reason attribute set to the reason member of the PresentationConnectionCloseEventInit object passed to the constructor, and its message attribute set to the message member of this PresentationConnectionCloseEventInit object if set, to an empty string otherwise.

6.5.5 Closing a PresentationConnection

When the user agent is to start closing a presentation connection , it MUST do the following:

  • If the presentation connection state of presentationConnection is not connecting or connected then abort the remaining steps.
  • Set the presentation connection state of presentationConnection to closed .
  • Start to signal to the destination browsing context the intention to close the corresponding PresentationConnection , passing the closeReason to that context. The user agent does not need to wait for acknowledgement that the corresponding PresentationConnection was actually closed before proceeding to the next step.
  • If closeReason is not wentaway , then locally run the steps to close the presentation connection with presentationConnection , closeReason , and closeMessage .

When the user agent is to close a presentation connection , it MUST do the following:

  • If there is a pending close the presentation connection task for presentationConnection , or a close the presentation connection task has already run for presentationConnection , then abort the remaining steps.
  • If the presentation connection state of presentationConnection is not connecting , connected , or closed , then abort the remaining steps.
  • If the presentation connection state of presentationConnection is not closed , set it to closed .
  • Remove presentationConnection from the set of presentation controllers .
  • Populate the presentation controllers monitor with the set of presentation controllers .
  • Fire an event named close , that uses the PresentationConnectionCloseEvent interface, with the reason attribute initialized to closeReason and the message attribute initialized to closeMessage , at presentationConnection . The event must not bubble and must not be cancelable.

6.5.6 Terminating a presentation in a controlling browsing context

When a controlling user agent is to terminate a presentation in a controlling browsing context using connection , it MUST run the following steps:

  • If the presentation connection state of connection is not connected or connecting , then abort these steps.
  • For each known connection in the set of controlled presentations whose presentation URL and presentation identifier equal those of connection , run the following substeps:
      • Set the presentation connection state of known connection to terminated .
      • Fire an event named terminate at known connection .
  • Send a termination request for the presentation to its receiving user agent using an implementation specific mechanism.

6.5.7 Terminating a presentation in a receiving browsing context

When any of the following occur, the receiving user agent MUST terminate a presentation in a receiving browsing context :

  • The receiving user agent is to unload a document corresponding to the receiving browsing context , e.g. in response to a request to navigate that context to a new resource.

This could happen by an explicit user action, or as a policy of the user agent. For example, the receiving user agent could be configured to terminate presentations whose PresentationConnection objects are all closed for 30 minutes.

  • A controlling user agent sends a termination request to the receiving user agent for that presentation.

When a receiving user agent is to terminate a presentation in a receiving browsing context , it MUST run the following steps:

  • Let P be the presentation to be terminated, let allControllers be the set of presentation controllers that were created for P , and connectedControllers an empty list.
  • For each connection in allControllers , run the following substeps:
      • If the presentation connection state of connection is connected , then add connection to connectedControllers .
      • Set the presentation connection state of connection to terminated .
  • If there is a receiving browsing context for P , and it has a document for P that is not unloaded, unload a document corresponding to that browsing context , remove that browsing context from the user interface and discard it.
  • For each connection in connectedControllers , send a termination confirmation for P to the controlling user agent that owns the destination browsing context of connection , using an implementation specific mechanism.

Only one termination confirmation needs to be sent per controlling user agent .

6.5.8 Handling a termination confirmation in a controlling user agent

When a receiving user agent is to send a termination confirmation for a presentation P , and that confirmation was received by a controlling user agent , the controlling user agent MUST run the following steps:

  • For each connection in the set of controlled presentations that was created by starting or reconnecting to P , run the following substeps:
      • If the presentation connection state of connection is not connected or connecting , then abort these substeps.
      • Set the presentation connection state of connection to terminated .
      • Fire an event named terminate at connection .

6.5.9 Event Handlers

The following are the event handlers (and their corresponding event handler event types) that must be supported, as event handler IDL attributes, by objects implementing the PresentationConnection interface:

Event handler    Event handler event type
onconnect        connect
onclose          close
onterminate      terminate
onmessage        message

6.6 Interface PresentationReceiver

The PresentationReceiver interface allows a receiving browsing context to access the controlling browsing contexts and communicate with them. The PresentationReceiver interface MUST be implemented in a receiving browsing context provided by a receiving user agent .

On getting, the connectionList attribute MUST return the result of running the following steps:

  • If the presentation controllers promise is not null , return the presentation controllers promise and abort all remaining steps.
  • Otherwise, let the presentation controllers promise be a new Promise constructed in the JavaScript realm of this PresentationReceiver object.
  • Return the presentation controllers promise , but continue running these steps in parallel.
  • If the presentation controllers monitor is not null , resolve the presentation controllers promise with the presentation controllers monitor .

6.6.1 Creating a receiving browsing context

When the user agent is to create a receiving browsing context , it MUST run the following steps:

  • Create a new top-level browsing context C , set to display content on D .
  • Set the session history of C to be the empty list.
  • Set the sandboxed modals flag and the sandboxed auxiliary navigation browsing context flag on C .
  • If the receiving user agent implements [ PERMISSIONS ], set the permission state of all permission descriptor types for C to "denied" .
  • Create a new empty cookie store for C .
  • Create a new empty store for C to hold HTTP authentication states.
  • Create a new empty storage for session storage areas and local storage areas for C .
  • If the receiving user agent implements [ INDEXEDDB ], create a new empty storage for IndexedDB databases for C .
  • If the receiving user agent implements [ SERVICE-WORKERS ], create a new empty list of registered service worker registrations and a new empty set of Cache objects for C .
  • Navigate C to presentationUrl .
  • Start monitoring incoming presentation connections for C with presentationId and presentationUrl .

All child navigables created by the presented document, i.e. that have the receiving browsing context as their top-level browsing context , MUST also have restrictions 2-4 above. In addition, they MUST have the sandboxed top-level navigation without user activation browsing context flag set. All of these browsing contexts MUST also share the same browsing state (storage) for features 5-10 listed above.

When the top-level browsing context attempts to navigate to a new resource and runs the steps to navigate , it MUST follow step 1 to determine if it is allowed to navigate . In addition, it MUST NOT be allowed to navigate itself to a new resource, except by navigating to a fragment identifier or by reloading its document .

This allows the user to grant permission based on the origin of the presentation URL shown when selecting a presentation display .

If the top-level browsing context was not allowed to navigate , it SHOULD NOT offer to open the resource in a new top-level browsing context , but otherwise SHOULD be consistent with the steps to navigate .

Window clients and worker clients associated with the receiving browsing context and its descendant navigables must not be exposed to service workers associated with each other.

When the receiving browsing context is terminated, any service workers associated with it and the browsing contexts in its descendant navigables MUST be unregistered and terminated. Any browsing state associated with the receiving browsing context and the browsing contexts in its descendant navigables , including session history , the cookie store , any HTTP authentication state, any databases , the session storage areas , the local storage areas , the list of registered service worker registrations and the Cache objects MUST be discarded and not used for any other browsing context .

This algorithm is intended to create a well defined environment to allow interoperable behavior for 1-UA and 2-UA presentations, and to minimize the amount of state remaining on a presentation display used for a 2-UA presentation.

The receiving user agent SHOULD fetch resources in a receiving browsing context with an HTTP Accept-Language header that reflects the language preferences of the controlling user agent (i.e., with the same Accept-Language that the controlling user agent would have sent). This will help the receiving user agent render the presentation with fonts and locale-specific attributes that reflect the user's preferences.

Given the operating context of the presentation display , some Web APIs will not work by design (for example, by requiring user input) or will be obsolete (for example, by attempting window management); the receiving user agent should be aware of this. Furthermore, any modal user interface will need to be handled carefully. The sandboxed modals flag is set on the receiving browsing context to prevent most of these operations.

As noted in Conformance , a user agent that is both a controlling user agent and receiving user agent may allow a receiving browsing context to create additional presentations (thus becoming a controlling browsing context as well). Web developers can use navigator.presentation.receiver to detect when a document is loaded as a receiving browsing context.

6.7 Interface PresentationConnectionList

The connections attribute MUST return the non-terminated set of presentation connections in the set of presentation controllers .

6.7.1 Monitoring incoming presentation connections

When the receiving user agent is to start monitoring incoming presentation connections in a receiving browsing context from controlling browsing contexts , it MUST listen to and accept incoming connection requests from a controlling browsing context using an implementation specific mechanism. When a new connection request is received from a controlling browsing context , the receiving user agent MUST run the following steps:

  • If presentationId and I are not equal, refuse the connection and abort all remaining steps.
  • Create a new PresentationConnection S .
  • Set the presentation identifier of S to I .
  • Set the presentation URL of S to presentationUrl .
  • Establish the connection between the controlling and receiving browsing contexts using an implementation specific mechanism.
  • If connection establishment completes successfully, set the presentation connection state of S to connected . Otherwise, set the presentation connection state of S to closed and abort all remaining steps.
  • Add S to the set of presentation controllers .
  • Let the presentation controllers monitor be a new PresentationConnectionList constructed in the JavaScript realm of the PresentationReceiver object of the receiving browsing context .
  • If the presentation controllers promise is not null , resolve the presentation controllers promise with the presentation controllers monitor .
  • Queue a task to fire an event named connectionavailable , that uses the PresentationConnectionAvailableEvent interface, with the connection attribute initialized to S , at the presentation controllers monitor . The event must not bubble and must not be cancelable.

6.7.2 Event Handlers

The following are the event handlers (and their corresponding event handler event types) that must be supported, as event handler IDL attributes, by objects implementing the PresentationConnectionList interface:

Event handler             Event handler event type
onconnectionavailable     connectionavailable

7. Security and privacy considerations

7.1 Personally identifiable information

The change event fired on the PresentationAvailability object reveals one bit of information about the presence or absence of a presentation display , often discovered through the browser's local area network. This could be used in conjunction with other information for fingerprinting the user. However, this information is also dependent on the user's local network context, so the risk is minimized.

The API enables monitoring the list of available presentation displays . How the user agent determines the compatibility and availability of a presentation display with a given URL is an implementation detail. If a controlling user agent matches a presentation request URL to a DIAL application to determine its availability, this feature can be used to probe information about which DIAL applications the user has installed on the presentation display without user consent.

7.2 Cross-origin access

A presentation is allowed to be accessed across origins; the presentation URL and presentation identifier used to create the presentation are the only information needed to reconnect to a presentation from any origin in the controlling user agent. In other words, a presentation is not tied to a particular opening origin.

This design allows controlling contexts from different origins to connect to a shared presentation resource. The security of the presentation identifier prevents arbitrary origins from connecting to an existing presentation.

This specification also allows a receiving user agent to publish information about its set of controlled presentations , and a controlling user agent to reconnect to presentations started from other devices. This is possible when the controlling browsing context obtains the presentation URL and presentation identifier of a running presentation from the user, local storage, or a server, and then connects to the presentation via reconnect .

This specification makes no guarantee as to the identity of any party connecting to a presentation. Once connected, the presentation may wish to further verify the identity of the connecting party through application-specific means. For example, the presentation could challenge the controller to provide a token via send that the presentation uses to verify identity and authorization.
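A non-normative sketch of such a check follows; the message format and the sharedSecret and expectedToken values are application-specific assumptions, not part of this specification:

  // Controller side: send a token once the connection is established.
  connection.onconnect = () => {
    connection.send(JSON.stringify({ type: 'auth', token: sharedSecret }));
  };

  // Presentation side: accept further messages only after the token matches.
  let authorized = false;
  controllerConnection.onmessage = event => {
    const message = JSON.parse(event.data);
    if (!authorized) {
      authorized = message.type === 'auth' && message.token === expectedToken;
      if (!authorized) controllerConnection.close();
      return;
    }
    // Handle application messages from a verified controller.
  };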

7.3 User interface guidelines

When the user is asked permission to use a presentation display during the steps to select a presentation display , the controlling user agent should make it clear what origin is requesting presentation and what origin will be presented.

Display of the origin requesting presentation will help the user understand what content is making the request, especially when the request is initiated from a child navigable . For example, embedded content may try to convince the user to click to trigger a request to start an unwanted presentation.

The sandboxed top-level navigation without user activation browsing context flag is set on the receiving browsing context to enforce that the top-level origin of the presentation remains the same during the lifetime of the presentation.

When a user starts a presentation , the user will begin with exclusive control of the presentation. However, the Presentation API allows additional devices (likely belonging to distinct users) to connect and thereby control the presentation as well. When a second device connects to a presentation, it is recommended that all connected controlling user agents notify their users via the browser chrome that the original user has lost exclusive access, and there are now multiple controllers for the presentation.

In addition, it may be the case that the receiving user agent is capable of receiving user input, as well as acting as a presentation display . In this case, the receiving user agent should notify its user via browser chrome when a receiving browsing context is under the control of a remote party (i.e., it has one or more connected controllers).

7.4 Device Access

The Presentation API abstracts away what "local" means for displays, meaning that it exposes network-accessible displays as though they were directly attached to the user's device. The Presentation API requires user permission for a page to access any display to mitigate issues that could arise, such as showing unwanted content on a display viewable by others.

7.5 Temporary identifiers and browser state

The presentation URL and presentation identifier can be used to connect to a presentation from another browsing context. They can be intercepted if an attacker can inject content into the controlling page.

7.6 Private browsing mode and clearing of browsing data

The content displayed on the presentation is separate from the content displayed by the controller. In particular, if the user is logged in in both contexts, then logs out of the controlling browsing context , they will not be automatically logged out of the receiving browsing context . Applications that use authentication should take extra care when communicating between devices.

The set of presentations known to the user agent should be cleared when the user requests to "clear browsing data."

When in private browsing mode ("incognito"), the initial set of controlled presentations in that browsing session must be empty. Any presentation connections added to it must be discarded when the session terminates.

7.7 Messaging between presentation connections

This spec will not mandate communication protocols between the controlling browsing context and the receiving browsing context , but it should set some guarantees of message confidentiality and authenticity between corresponding presentation connections .

A. IDL Index

B.1 Terms defined by this specification

  • 1-UA mode §1.
  • 2-UA mode §1.
  • allowed to navigate §4.
  • available presentation display §6.1
  • binaryType attribute for PresentationConnection §6.5
  • change §6.4
  • close method for PresentationConnection §6.5
  • close a presentation connection §6.5.5
  • "closed" enum value for PresentationConnectionState §6.5
  • "closed" enum value for PresentationConnectionCloseReason §6.5.4
  • connect §6.5.9
  • "connected" enum value for PresentationConnectionState §6.5
  • "connecting" enum value for PresentationConnectionState §6.5
  • connection attribute for PresentationConnectionAvailableEvent §6.4.5
  • connection member for PresentationConnectionAvailableEventInit §6.4.5
  • connectionavailable §6.3.6
  • connectionList attribute for PresentationReceiver §6.6
  • connections attribute for PresentationConnectionList §6.7
  • constructor for PresentationRequest §6.3
  • constructor for PresentationConnectionAvailableEvent §6.4.5
  • constructor for PresentationConnectionCloseEvent §6.5.4
  • controlling browsing context §6.1
  • Controlling user agent §3.1
  • create a receiving browsing context §6.6.1
  • creating a new browsing context §4.
  • database §4.
  • default presentation request §6.1
  • defaultRequest attribute for Presentation §6.2.1
  • destination browsing context §6.1
  • "error" enum value for PresentationConnectionCloseReason §6.5.4
  • establish a presentation connection §6.5.1
  • getAvailability method for PresentationRequest §6.4.3
  • id attribute for PresentationConnection §6.5
  • list of available presentation displays §6.4.2
  • local storage area §4.
  • message attribute for PresentationConnectionCloseEvent §6.5.4
  • message member for PresentationConnectionCloseEventInit §6.5.4
  • monitor the list of available presentation displays §6.4.4
  • monitoring incoming presentation connections §6.7.1
  • navigating to a fragment identifier §4.
  • onchange attribute for PresentationAvailability §6.4
  • onclose attribute for PresentationConnection §6.5.9
  • onconnect attribute for PresentationConnection §6.5.9
  • onconnectionavailable attribute for PresentationRequest §6.3.6
  • onconnectionavailable attribute for PresentationConnectionList §6.7.2
  • onmessage attribute for PresentationConnection §6.5.9
  • onterminate attribute for PresentationConnection §6.5.9
  • parse a url §4.
  • presentation attribute for Navigator §6.2
  • Presentation interface §6.2
  • presentation connection §6.1
  • presentation connection state §6.1
  • presentation controllers monitor §6.1
  • presentation controllers promise §6.1
  • presentation display §6.1
  • presentation display availability §6.4
  • presentation identifier §6.1
  • presentation message data §6.5.2
  • presentation message type §6.5.2
  • presentation request URLs §6.3
  • presentation URL §6.1
  • PresentationAvailability interface §6.4
  • PresentationConnection interface §6.5
  • PresentationConnectionAvailableEvent interface §6.4.5
  • PresentationConnectionAvailableEventInit dictionary §6.4.5
  • PresentationConnectionCloseEvent interface §6.5.4
  • PresentationConnectionCloseEventInit dictionary §6.5.4
  • PresentationConnectionCloseReason enum §6.5.4
  • PresentationConnectionList interface §6.7
  • PresentationConnectionState enum §6.5
  • PresentationReceiver interface §6.6
  • PresentationRequest interface §6.3
  • receive a message §6.5.3
  • receiver attribute for Presentation §6.2.2
  • receiving browsing context §6.1
  • Receiving user agent §3.1
  • reconnect method for PresentationRequest §6.3.5
  • reload a document §4.
  • select a presentation display §6.3.2
  • send method for PresentationConnection §6.5
  • send a message §6.5.2
  • Send a termination request §6.5.6
  • session history §4.
  • session storage area §4.
  • set of controlled presentations §6.1
  • set of presentation availability objects §6.4.1
  • set of presentation controllers §6.1
  • start method for PresentationRequest §6.3.2
  • start a presentation connection §6.3.4
  • start a presentation from a default presentation request §6.3.3
  • start closing a presentation connection §6.5.5
  • state attribute for PresentationConnection §6.5
  • terminate method for PresentationConnection §6.5
  • terminate a presentation in a controlling browsing context §6.5.6
  • terminate a presentation in a receiving browsing context §6.5.7
  • "terminated" enum value for PresentationConnectionState §6.5
  • unload a document §4.
  • url attribute for PresentationConnection §6.5
  • user agents §3.1
  • valid presentation identifier §6.1
  • value attribute for PresentationAvailability §6.4
  • "wentaway" enum value for PresentationConnectionCloseReason §6.5.4

B.2 Terms defined by reference

  • creating an event
  • Event interface
  • EventTarget interface
  • fire an event
  • JavaScript realm
  • Blob interface
  • active sandboxing flag set (for Document )
  • active window (for navigable )
  • browsing context
  • child navigable
  • current settings object
  • descendant navigables (for Document )
  • event handler
  • event handler event type
  • EventHandler
  • in parallel
  • localStorage attribute (for WindowLocalStorage )
  • MessageEvent interface
  • Navigator interface
  • Queue a task
  • reload() (for Location )
  • sandboxed auxiliary navigation browsing context flag
  • sandboxed modals flag
  • sandboxed presentation browsing context flag
  • sandboxed top-level navigation without user activation browsing context flag
  • sessionStorage attribute (for WindowSessionStorage )
  • task source
  • top-level browsing context
  • transient activation
  • permission descriptor types (for powerful feature)
  • permission state
  • cookie store
  • Accept-Language
  • HTTP authentication
  • potentially trustworthy URL
  • Cache interface
  • service worker registrations
  • service workers
  • window client (for service worker client)
  • worker client (for service worker client)
  • ArrayBuffer interface
  • ArrayBufferView
  • boolean type
  • DOMString interface
  • [Exposed] extended attribute
  • FrozenArray interface
  • InvalidAccessError exception
  • InvalidStateError exception
  • NotAllowedError exception
  • NotFoundError exception
  • NotSupportedError exception
  • OperationError exception
  • Promise interface
  • [SameObject] extended attribute
  • [SecureContext] extended attribute
  • SecurityError exception
  • SyntaxError exception
  • throw (for exception )
  • undefined type
  • USVString interface
  • RTCDataChannel interface
  • BinaryType enum

C. Acknowledgments

Thanks to Addison Phillips, Anne Van Kesteren, Anssi Kostiainen, Anton Vayvod, Chris Needham, Christine Runnegar, Daniel Davis, Domenic Denicola, Erik Wilde, François Daoust, 闵洪波 (Hongbo Min), Hongki CHA, Hubert Sablonnière, Hyojin Song, Hyun June Kim, Jean-Claude Dufourd, Joanmarie Diggs, Jonas Sicking, Louay Bassbouss, Mark Watson, Martin Dürst, Matt Hammond, Mike West, Mounir Lamouri, Nick Doty, Oleg Beletski, Philip Jägenstedt, Richard Ishida, Shih-Chiang Chien, Takeshi Kanai, Tobie Langel, Tomoyuki Shimizu, Travis Leithead, and Wayne Carr for help with editing, reviews and feedback to this draft.

AirPlay , HDMI , Chromecast , DLNA and Miracast are registered trademarks of Apple Inc., HDMI Licensing LLC., Google Inc., the Digital Living Network Alliance, and the Wi-Fi Alliance, respectively. They are only cited as background information and their use is not required to implement the specification.

D. Candidate Recommendation exit criteria

For this specification to be advanced to Proposed Recommendation, there must be, for each of the conformance classes it defines ( controlling user agent and receiving user agent ), at least two independent, interoperable implementations of each feature. Each feature may be implemented by a different set of products; there is no requirement that all features be implemented by a single product. Additionally, implementations of the controlling user agent conformance class must include at least one implementation of the 1-UA mode , and one implementation of the 2-UA mode . 2-UA mode implementations may only support non-HTTP/HTTPS presentation URLs. Implementations of the receiving user agent conformance class may not include implementations of the 2-UA mode .

The API was recently restricted to secure contexts. Deprecation of the API in non-secure contexts in early implementations takes time. The group may request transition to Proposed Recommendation with implementations that still expose the API in non-secure contexts, provided there exists a timeline to restrict these implementations in the future.

For the purposes of these criteria, an implementation is a user agent that:

  • implements one of the conformance classes of the specification.
  • is available to the general public. The implementation may be a shipping product or other publicly available version (i.e., beta version, preview release, or "nightly build"). Non-shipping product releases must have implemented the feature(s) for a period of at least one month in order to demonstrate stability.
  • is not experimental (i.e. a version specifically designed to pass the test suite and not intended for normal usage going forward).

E. Change log

This section lists changes made to the spec since it was first published as Candidate Recommendation in July 2016, with links to related issues on the group's issue tracker.

E.1 Changes since 01 June 2017

  • Added a note about receiving browsing contexts starting presentations ( #487 )
  • Removed the definition of the BinaryType enum ( #473 )
  • Updated WebIDL to use constructor operations ( #469 )
  • Clarified how receiving browsing contexts are allowed to navigate ( #461 )
  • Added explanatory text to the sample code ( #460 )
  • Added sample code that starts a second presentation from the same controller ( #453 )
  • Updated the steps to construct a PresentationRequest to ignore a URL with an unsupported scheme ( #447 )
  • Clarified restrictions on navigation in receiving browsing contexts ( #434 )
  • Updated WebIDL to use [Exposed=Window] ( #438 )
  • Various editorial updates ( #429 , #431 , #432 , #433 , #441 , #442 , #443 , #454 , #465 , #482 , #483 , #486 )

E.2 Changes since 14 July 2016

  • Fixed document license ( #428 )
  • Updated termination algorithm to also discard the receiving browsing context and allow termination in a connecting state ( #421 , #423 )
  • Dropped sandboxing section, now integrated in HTML ( #437 in the Web Platform Working Group issue tracker)
  • Relaxed exit criteria to match known implementations plans ( #406 )
  • The sandboxed top-level navigation browsing context flag and the sandboxed modals flag are now set on the receiving browsing context to prevent top-level navigation and the ability to spawn new browsing contexts ( #414 )
  • Moved sandboxing flag checks to PresentationRequest constructor ( #379 , #398 )
  • Updated normative references to target stable specifications ( #295 , #396 )
  • Made display selection algorithm reject in ancestor and descendant browsing context ( #394 )
  • Renamed PresentationConnectionClosedReason to PresentationConnectionCloseReason ( #393 )
  • Fixed getAvailability and monitoring algorithms ( #335 , #381 , #382 , #383 , #387 , #388 , #392 )
  • Assigned correct JavaScript realm to re-used objects ( #391 )
  • API now restricted to secure contexts ( #380 )
  • Set the state of receiving presentation connections to terminated before unload ( #374 )
  • Defined environment for nested contexts of the receiving browsing context ( #367 )
  • Removed [SameObject] from Presentation.receiver and PresentationReceiver.connectionList ( #365 , #407 )
  • Replaced DOMString with USVString for PresentationRequest URLs ( #361 )
  • Added a presentation task source for events ( #360 )
  • Changed normative language around UUID generation ( #346 )
  • Added failure reason to close message ( #344 )
  • Added error handling to establish a presentation connection algorithm ( #343 )
  • Made navigator.presentation mandatory ( #341 )
  • Used current settings object in steps that require a settings object ( #336 )
  • Updated security check step to handle multiple URLs case ( #329 )
  • Made PresentationConnection.id mandatory ( #325 )
  • Renamed PresentationConnectionClosedEvent to PresentationConnectionCloseEvent ( #324 )
  • Added an implementation note for advertising and rendering a user friendly display name ( #315 )
  • Added note for presentation detection ( #303 )
  • Various editorial updates ( #334 , #337 , #339 , #340 , #342 , #345 , #359 , #363 , #366 , #397 )


Presentation API Demonstration

HTML Slidy remote A Presentation API demo

François Daoust, [email protected] , @tidoust

What is the Presentation API?

  • The Presentation API defines an API that allows a web application to request display of web content on a connected display
  • Targeted displays include those connected through HDMI, VGA, Miracast, WiDi, Airplay, Google Cast, etc.
  • The Second Screen Presentation Community Group develops the specification
  • The Presentation API is not yet stable
  • No Web browser implements the Presentation API at this stage

JavaScript shim

The JavaScript shim featured in the HTML Slidy demo extends the one used in the first demo :

  • Message passing is used to communicate between the sender and the receiver.
  • Google Cast devices are supported, provided the Google Cast extension is installed

Restricted to Google Chrome/Chromium

Support for actual second screens requires either:

  • the modified version of Chromium provided by Intel's Open Source Technology Center, or
  • the Google Cast extension together with a Google Cast device (such as a Chromecast).

The demo falls back to opening the slide show in a separate window when it cannot detect a second screen.

Origin restrictions

The receiver side opens up the requested slide show in a child iframe. To control that iframe, the slide show must be served from the same origin as the receiver app.

The demo only knows about two receiver apps:

  • https://webscreens.github.io/slidyremote/receiver.html
  • https://www.w3.org/2014/secondscreen/demo/slidyremote/receiver.html

Receiver apps have been published as custom Google Cast receiver apps. As such, they may run on any Google Cast device.

Google requires that receiver apps be served over HTTPS.

How the demo works: sender side

When the user enters the URL of a slide show, the demo page:

  • checks its origin and rejects unknown ones;
  • calls navigator.presentation.requestSession with the appropriate receiver app;
  • uses the returned PresentationSession object to tell the receiver app to load the slideshow;
  • displays the Slidy remote
  • sends all Slidy commands as PresentationSession messages to the receiver app
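A rough, non-authoritative sketch of that sender-side flow follows; it uses the legacy requestSession()/PresentationSession shape the demo describes, so the shim's exact signatures may differ, and RECEIVER_APP_URL, slideShowUrl and the message format are illustrative:

  var session = navigator.presentation.requestSession(RECEIVER_APP_URL);

  session.onstatechange = function () {
    if (session.state === 'connected') {
      // Tell the receiver app which slide show to load.
      session.postMessage(JSON.stringify({ cmd: 'load', url: slideShowUrl }));
    }
  };

  // Later, forward Slidy commands (next, previous, etc.) as messages.
  function sendSlidyCommand(name) {
    session.postMessage(JSON.stringify({ cmd: name }));
  }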

How the demo works: receiver side

When the receiver app is loaded, it:

  • listens to navigator.presentation.onmessage events
  • loads the appropriate slide show in a child iframe when so requested
  • converts Slidy command messages to actual Slidy functions calls in the child iframe
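A rough sketch of that receiver logic, under the same assumptions as the sender sketch above; forwardToSlidy is a placeholder for the bridge into the child iframe:

  navigator.presentation.onmessage = function (event) {
    var message = JSON.parse(event.data);
    if (message.cmd === 'load') {
      // Load the requested slide show in the child iframe.
      document.getElementById('slideshow').src = message.url;
    } else {
      // Convert other command messages into Slidy function calls in the iframe.
      forwardToSlidy(message.cmd);
    }
  };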

Presentation Controller API (Google Cast) Sample

Available in Chrome 63+

This sample illustrates the use of the Presentation API , which gives the ability to access external presentation-type displays and use them for presenting web content. The PresentationRequest object is associated with a request to initiate a presentation, made by a controlling browsing context , and takes a presentation request URL when constructed. A presentation can be started by calling the start() method on the PresentationRequest object.

Note that this demo uses a cast: URL to start the presentation instead of the receiver page's URL. This will load the receiver page on a Chromecast, but the sender page will be unable to communicate with it as the Chromecast does not implement the Presentation Receiver API.

The sample provides buttons that call presentationRequest.start() , presentationConnection.close() , and presentationConnection.terminate() .
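A simplified sketch of what those controls do; the cast: URL is a placeholder for the sample's receiver application ID, and the logging is illustrative:

  const request = new PresentationRequest(['cast:APP_ID']);
  let connection;

  async function startPresentation() {
    // Must be called from a user gesture; prompts the user to pick a display.
    connection = await request.start();
    connection.onclose = () => console.log('connection closed');
    connection.onterminate = () => console.log('presentation terminated');
  }

  function closeConnection() {
    // Closes this connection; the presentation itself keeps running.
    if (connection) connection.close();
  }

  function terminatePresentation() {
    // Stops the presentation on the display.
    if (connection) connection.terminate();
  }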


IIIF Presentation API 3.0

Status of this Document

This Version: 3.0.0

Latest Stable Version: 3.0.0

Previous Version: 2.1.1

Michael Appleby , Yale University

Tom Crane , Digirati

Robert Sanderson , J. Paul Getty Trust

Jon Stroop , Princeton University Library

Simeon Warner , Cornell University

Copyright © 2012-2024 Editors and contributors. Published by the IIIF Consortium under the CC-BY license, see disclaimer .

1. Introduction

Access to digital representations of objects is a fundamental requirement for many research activities, for the transmission of cultural knowledge, and for the daily pursuits of every web citizen. Ancient scrolls, paintings, letters, books, newspapers, films, operas, albums, field recordings, and computer generated animations are compound objects: they can have many parts, and complex structures. These resources may also bear the written or spoken word, and this linguistic content is often as important as the visual or audible representation.

Collections of both digitized physical objects and much born-digital content benefit from a standardized description of their structure, layout, and presentation mode. This document specifies this standardized description of the collection or compound object, using a JSON format. Many different rich and dynamic user experiences can be implemented, presenting content from across collections and institutions.

A compound object may comprise a series of pages, surfaces, or extents of time; for example the single view of a painting, the two sides of a photograph, four cardinal views of a statue, the many pages of an edition of a newspaper or book, or the duration of an act of an opera. This specification addresses how to provide an order for these views or extents, the references to the resources needed to present them, and the descriptive information needed to allow the user to understand what is being seen or heard.

The principles of Linked Data and the Architecture of the Web are adopted in order to provide a distributed and interoperable framework. The Shared Canvas data model and JSON-LD are leveraged to create an easy-to-implement, JSON -based format.

Please send feedback to [email protected]

1.1. Objectives and Scope

The objective of the IIIF (pronounced “Triple-Eye-Eff”) Presentation API is to provide the information necessary to allow a rich, online viewing environment for compound digital objects to be presented to a human user, often in conjunction with the IIIF Image API . This is the sole purpose of the API and therefore descriptive information is given in a way that is intended for humans to read, but not semantically available to machines. In particular, it explicitly does not aim to provide metadata that would allow a search engine to index digital objects.

Implementations of this specification will be able to:

  • display to the user digitized images, video, audio, and other content types associated with a particular physical or born-digital object;
  • allow the user to navigate between multiple views or time extents of the object, either sequentially or hierarchically;
  • display descriptive information about the object, view or navigation structure to provide context to the user;
  • and provide a shared environment in which both publishers and users can annotate the object and its content with additional information.

The following are not in scope:

  • Provision of metadata for harvesting and discovery is not directly supported. Properties to reference further descriptive resources are available, and their use is encouraged.
  • Search within the object, which is described by the IIIF Content Search API .

This document is accompanied by the Presentation API Cookbook , which demonstrates motivating use cases for IIIF and contains examples showing how the objectives may be achieved.

1.2. Terminology

This specification uses the following terms:

  • embedded : When a resource (A) is embedded within an embedding resource (B), the complete JSON representation of resource A is present within the JSON representation of resource B, and dereferencing the URI of resource A will not result in additional information. Example: Canvas A is embedded in Manifest B.
  • referenced : When a resource (A) is referenced from a referencing resource (B), an incomplete JSON representation of resource A is present within the JSON representation of resource B, and dereferencing the URI of resource A will result in additional information. Example: Manifest A is referenced from Collection B.
  • HTTP (S) : The HTTP or HTTPS URI scheme and internet protocol.

The terms array , JSON object , number , string , and boolean in this document are to be interpreted as defined by the JavaScript Object Notation ( JSON ) specification.

The key words must , must not , required , shall , shall not , should , should not , recommended , may , and optional in this document are to be interpreted as described in RFC 2119 .

2. Resource Type Overview

The objectives described above require a model in which one can characterize the compound object (via the Manifest resource) and the individual views of the object ( Canvas resources). Each view may reference images, audio, video and other content resources to allow it to be rendered appropriately. A compound object may also have sections; for example, a book may have chapters of several pages, or a play might be divided into acts and scenes ( Range resources) and there may be groups of such objects ( Collection resources). These resource types, along with their properties, make up the IIIF Presentation API .

This section provides an overview of the resource types (or classes) that are used in the specification. They are each presented in more detail in Section 5 .

2.1. Defined Types

This specification defines the following resource types:

Data Model

Collection

An ordered list of Manifests, and/or further Collections. Collections allow Manifests and child Collections to be grouped in a hierarchical structure for presentation, which can be for generating navigation, showing dynamic results from a search, or providing fixed sets of related resources for any other purpose.

Manifest

A description of the structure and properties of the compound object. It carries information needed for the client to present the content to the user, such as a title and other descriptive information about the object or the intellectual work that it conveys. Each Manifest usually describes how to present a single compound object such as a book, a statue or a music album.

Canvas

A virtual container that represents a particular view of the object and has content resources associated with it or with parts of it. The Canvas provides a frame of reference for the layout of the content, both spatially and temporally. The concept of a Canvas is borrowed from standards like PDF and HTML , or applications like Photoshop and PowerPoint, where an initially blank display surface has images, video, text and other content “painted” on to it by Annotations, collected in Annotation Pages.

Range

An ordered list of Canvases, and/or further Ranges. Ranges allow Canvases, or parts thereof, to be grouped together in some way. This could be for content-based reasons, such as might be described in a table of contents or the set of scenes in a play. Equally, physical features might be important such as page gatherings in an early book, or when recorded music is split across different physical carriers such as two CDs.

2.2. Additional Types

This specification makes use of types defined in the Web Annotation Data Model specification, in particular the following:

Annotation Page

An ordered list of Annotations that is typically associated with a Canvas but may be referenced from other types of resource as well. Annotation Pages collect and order lists of Annotations, which in turn provide commentary about a resource or content that is part of a Canvas.

Annotation

Annotations associate content resources with Canvases. The same mechanism is used for the visible and/or audible resources as is used for transcriptions, commentary, tags and other content. This provides a single, unified method for aligning information, and provides a standards-based framework for distinguishing parts of resources and parts of Canvases. As Annotations can be added later, it promotes a distributed system in which publishers can align their content with the descriptions created by others. Annotation related functionality may also rely on further classes such as SpecificResource, Choice or Selectors.

Content Resource

Web resources such as images, audio, video, or text which are associated with a Canvas via an Annotation, or provide a representation of any resource.

Annotation Collection

An ordered list of Annotation Pages. Annotation Collections allow higher level groupings of Annotations to be recorded. For example, all of the English translation Annotations of a medieval French document could be kept separate from the transcription or an edition in modern French, or the director’s commentary on a film can be separated from the script.

3. Resource Properties

Most of the properties defined by this specification may be associated with any of the resource types described above, and may have more than one value. Properties relate to the resource with which they are associated, so the label property on a Manifest is the human readable label of the Manifest, whereas the same label property on a Canvas is the human readable label for that particular view.

The requirements for which classes have which properties are summarized in Appendix A .

Other properties are allowed, either via local extensions or those endorsed by the IIIF community. If a client discovers properties that it does not understand, then it must ignore them. See the Linked Data Context and Extensions section for more information about extensions.

This section also defines processing requirements for clients for each of the combinations of class and property. These requirements are for general purpose client implementations that are intended to be used to render the entire resource to the user, and not necessarily for consuming applications with specialized use or individual component implementations that might be used to construct a client. The inclusion of these requirements gives publishers a baseline expectation for how they can expect implementations advertised as compliant with this specification to behave when processing their content.

3.1. Descriptive Properties

These properties describe or represent the resource they are associated with, and are typically rendered to the user.

label

A human readable label, name or title. The label property is intended to be displayed as a short, textual surrogate for the resource if a human needs to make a distinction between it and similar resources, for example between objects, pages, or options for a choice of images to display. The label property can be fully internationalized, and each language can have multiple values. This pattern is described in more detail in the languages section.

The value of the property must be a JSON object, as described in the languages section.

  • A Collection must have the label property with at least one entry. Clients must render label on a Collection.
  • A Manifest must have the label property with at least one entry. Clients must render label on a Manifest.
  • A Canvas should have the label property with at least one entry. Clients must render label on a Canvas, and should generate a label for Canvases that do not have them.
  • A content resource may have the label property with at least one entry. If there is a Choice of content resource for the same Canvas, then they should each have at least the label property with at least one entry. Clients may render label on content resources, and should render them when part of a Choice.
  • A Range should have the label property with at least one entry. Clients must render label on a Range.
  • An Annotation Collection should have the label property with at least one entry. Clients should render label on an Annotation Collection.
  • Other types of resource may have the label property with at least one entry. Clients may render label on other types of resource.
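For example, a label value with entries in two languages; the titles below are invented for illustration:

  {
    "label": {
      "en": [ "Whistler's Mother" ],
      "fr": [ "La Mère de Whistler" ]
    }
  }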

metadata

An ordered list of descriptions to be displayed to the user when they interact with the resource, given as pairs of human readable label and value entries. The content of these entries is intended for presentation only; descriptive semantics should not be inferred. An entry might be used to convey information about the creation of the object, a physical description, ownership information, or other purposes.

The value of the metadata property must be an array of JSON objects, where each item in the array has both label and value properties. The values of both label and value must be JSON objects, as described in the languages section.

  • A Collection should have the metadata property with at least one item. Clients must render metadata on a Collection.
  • A Manifest should have the metadata property with at least one item. Clients must render metadata on a Manifest.
  • A Canvas may have the metadata property with at least one item. Clients should render metadata on a Canvas.
  • Other types of resource may have the metadata property with at least one item. Clients may render metadata on other types of resource.

Clients should display the entries in the order provided. Clients should expect to encounter long texts in the value property, and render them appropriately, such as with an expand button, or in a tabbed interface.
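For example, a metadata array with two entries; labels and values are invented for illustration:

  {
    "metadata": [
      {
        "label": { "en": [ "Creator" ] },
        "value": { "en": [ "Anonymous" ] }
      },
      {
        "label": { "en": [ "Date" ] },
        "value": { "en": [ "19th century" ] }
      }
    ]
  }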

summary

A short textual summary intended to be conveyed to the user when the metadata entries for the resource are not being displayed. This could be used as a brief description for item level search results, for small-screen environments, or as an alternative user interface when the metadata property is not currently being rendered. The summary property follows the same pattern as the label property described above.

  • A Collection should have the summary property with at least one entry. Clients should render summary on a Collection.
  • A Manifest should have the summary property with at least one entry. Clients should render summary on a Manifest.
  • A Canvas may have the summary property with at least one entry. Clients should render summary on a Canvas.
  • Other types of resource may have the summary property with at least one entry. Clients may render summary on other types of resource.

requiredStatement

Text that must be displayed when the resource is displayed or used. For example, the requiredStatement property could be used to present copyright or ownership statements, an acknowledgement of the owning and/or publishing institution, or any other text that the publishing organization deems critical to display to the user. Given the wide variation of potential client user interfaces, it will not always be possible to display this statement to the user in the client’s initial state. If initially hidden, clients must make the method of revealing it as obvious as possible.

The value of the property must be a JSON object, that has the label and value properties, in the same way as a metadata property entry. The values of both label and value must be JSON objects, as described in the languages section.

  • Any resource type may have the requiredStatement property. Clients must render requiredStatement on every resource type.

rights

A string that identifies a license or rights statement that applies to the content of the resource, such as the JSON of a Manifest or the pixels of an image. The value must be drawn from the set of Creative Commons license URIs, the RightsStatements.org rights statement URIs, or those added via the extension mechanism. The inclusion of this property is informative, and for example could be used to display an icon representing the rights assertions.

If displaying rights information directly to the user is the desired interaction, or a publisher-defined label is needed, then it is recommended to include the information using the requiredStatement property or in the metadata property.

The value must be a string. If the value is drawn from Creative Commons or RightsStatements.org, then the string must be a URI defined by that specification.

  • Any resource type may have the rights property. Clients may render rights on any resource type.

Machine actionable URIs and links for users

The machine actionable URIs for both Creative Commons licenses and RightsStatements.org right statements are http URIs. In both cases, human readable descriptions are available from equivalent https URIs. Clients may wish to rewrite links presented to users to use these equivalent https URIs.

provider

An organization or person that contributed to providing the content of the resource. Clients can then display this information to the user to acknowledge the provider’s contributions. This differs from the requiredStatement property, in that the data is structured, allowing the client to do more than just present text but instead have richer information about the people and organizations to use in different interfaces.

The organization or person is represented as an Agent resource.

  • Agents must have the id property, and its value must be a string. The string must be a URI that identifies the agent.
  • Agents must have the type property, and its value must be the string “Agent”.
  • Agents must have the label property, and its value must be a JSON object as described in the languages section.
  • Agents should have the homepage property, and its value must be an array of JSON objects as described in the homepage section.
  • Agents should have the logo property, and its value must be an array of JSON objects as described in the logo section.
  • Agents may have the seeAlso property, and its value must be an array of JSON object as described in the seeAlso section.

The value must be an array of JSON objects, where each item in the array conforms to the structure of an Agent, as described above.

  • A Collection should have the provider property with at least one item. Clients must render provider on a Collection.
  • A Manifest should have the provider property with at least one item. Clients must render provider on a Manifest.
  • Other types of resource may have the provider property with at least one item. Clients should render provider on other types of resource.
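For example, a provider entry describing an Agent with a homepage and logo; the URIs and names below are placeholders:

  {
    "provider": [
      {
        "id": "https://example.org/about",
        "type": "Agent",
        "label": { "en": [ "Example Organization" ] },
        "homepage": [
          {
            "id": "https://example.org/",
            "type": "Text",
            "label": { "en": [ "Example Organization Homepage" ] },
            "format": "text/html"
          }
        ],
        "logo": [
          {
            "id": "https://example.org/images/logo.png",
            "type": "Image",
            "format": "image/png",
            "height": 100,
            "width": 120
          }
        ]
      }
    ]
  }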

thumbnail

A content resource, such as a small image or short audio clip, that represents the resource that has the thumbnail property. A resource may have multiple thumbnail resources that have the same or different type and format .

The value must be an array of JSON objects, each of which must have the id and type properties, and should have the format property. Images and videos should have the width and height properties, and time-based media should have the duration property. It is recommended that a IIIF Image API service be available for images to enable manipulations such as resizing.

  • A Collection should have the thumbnail property with at least one item. Clients should render thumbnail on a Collection.
  • A Manifest should have the thumbnail property with at least one item. Clients should render thumbnail on a Manifest.
  • A Canvas may have the thumbnail property with at least one item. A Canvas should have the thumbnail property if there are multiple resources that make up the view. Clients should render thumbnail on a Canvas.
  • A content resource may have the thumbnail property with at least one item. Content resources should have the thumbnail property with at least one item if it is an option in a Choice of resources. Clients should render thumbnail on a content resource.
  • Other types of resource may have the thumbnail property with at least one item. Clients may render thumbnail on other types of resource.
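For example, a thumbnail entry for an image; the URI and dimensions are placeholders:

  {
    "thumbnail": [
      {
        "id": "https://example.org/iiif/book1/page1/full/80,100/0/default.jpg",
        "type": "Image",
        "format": "image/jpeg",
        "height": 100,
        "width": 80
      }
    ]
  }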

navDate

A date that clients may use for navigation purposes when presenting the resource to the user in a date-based user interface, such as a calendar or timeline. More descriptive date ranges, intended for display directly to the user, should be included in the metadata property for human consumption. If the resource contains Canvases that have the duration property, the datetime given corresponds to the navigation datetime of the start of the resource. For example, for a Range that includes a Canvas representing a set of video content recording a historical event, the navDate is the datetime of the first moment of the recorded event.

The value must be an XSD dateTime literal . The value must have a timezone, and should be given in UTC with the Z timezone indicator, but may instead be given as an offset of the form +hh:mm .

  • A Collection may have the navDate property. Clients may render navDate on a Collection.
  • A Manifest may have the navDate property. Clients may render navDate on a Manifest.
  • A Range may have the navDate property. Clients may render navDate on a Range.
  • A Canvas may have the navDate property. Clients may render navDate on a Canvas.
  • Other types of resource must not have the navDate property. Clients should ignore navDate on other types of resource.

placeholderCanvas

A single Canvas that provides additional content for use before the main content of the resource that has the placeholderCanvas property is rendered, or as an advertisement or stand-in for that content. Examples include images, text and sound standing in for video content before the user initiates playback; or a film poster to attract user attention. The content provided by placeholderCanvas differs from a thumbnail: a client might use thumbnail to summarize and navigate multiple resources, then show content from placeholderCanvas as part of the initial presentation of a single resource. A placeholder Canvas is likely to have different dimensions to those of the Canvas(es) of the resource that has the placeholderCanvas property.

Clients may display the content of a linked placeholder Canvas when presenting the resource. When more than one such Canvas is available, for example if placeholderCanvas is provided for the currently selected Range and the current Manifest, the client should pick the one most specific to the content. Publishers should not assume that the placeholder Canvas will be processed by all clients. Clients should take care to avoid conflicts between time-based media in the rendered placeholder Canvas and the content of the resource that has the placeholderCanvas property.

The value must be a JSON object with the id and type properties, and may have other properties of Canvases. The value of type must be the string Canvas . The object must not have the placeholderCanvas property, nor the accompanyingCanvas property.

  • A Collection may have the placeholderCanvas property. Clients may render placeholderCanvas on a Collection.
  • A Manifest may have the placeholderCanvas property. Clients may render placeholderCanvas on a Manifest.
  • A Canvas may have the placeholderCanvas property. Clients may render placeholderCanvas on a Canvas.
  • A Range may have the placeholderCanvas property. Clients may render placeholderCanvas on a Range.
  • Other types of resource must not have the placeholderCanvas property. Clients should ignore placeholderCanvas on other types of resource.

accompanyingCanvas

A single Canvas that provides additional content for use while rendering the resource that has the accompanyingCanvas property. Examples include an image to show while a duration-only Canvas is playing audio; or background audio to play while a user is navigating an image-only Manifest.

Clients may display the content of an accompanying Canvas when presenting the resource. As with placeholderCanvas above, when more than one accompanying Canvas is available, the client should pick the one most specific to the content. Publishers should not assume that the accompanying Canvas will be processed by all clients. Clients should take care to avoid conflicts between time-based media in the accompanying Canvas and the content of the resource that has the accompanyingCanvas property.

  • A Collection may have the accompanyingCanvas property. Clients may render accompanyingCanvas on a Collection.
  • A Manifest may have the accompanyingCanvas property. Clients may render accompanyingCanvas on a Manifest.
  • A Canvas may have the accompanyingCanvas property. Clients may render accompanyingCanvas on a Canvas.
  • A Range may have the accompanyingCanvas property. Clients may render accompanyingCanvas on a Range.
  • Other types of resource must not have the accompanyingCanvas property. Clients should ignore accompanyingCanvas on other types of resource.

3.2. Technical Properties

These properties describe technical features of the resources, and are typically processed by the client to understand how to render the resource.

id

The URI that identifies the resource. If the resource is only available embedded within another resource (see the terminology section for an explanation of “embedded”), such as a Range within a Manifest, then the URI may be the URI of the embedding resource with a unique fragment on the end. This is not true for Canvases, which must have their own URI without a fragment.

The value must be a string, and the value must be an HTTP (S) URI for resources defined in this specification. If the resource is retrievable via HTTP (S), then the URI must be the URI at which it is published. External resources, such as profiles, may have non- HTTP (S) URIs defined by other communities.

The existence of an HTTP (S) URI in the id property does not mean that the URI will always be dereferencable. If the resource with the id property is embedded , it may also be dereferenceable. If the resource is referenced (again, see the terminology section for an explanation of “referenced”), it must be dereferenceable. The definitions of the Resources give further guidance.

  • All resource types must have the id property. Clients may render id on any resource type, and should render id on Collections, Manifests and Canvases.

type

The type or class of the resource. For classes defined for this specification, the value of type will be described in the sections below describing each individual class.

For content resources, the value of type is drawn from other specifications. Recommendations for common content types such as image, text or audio are given in the table below.

The JSON objects that appear in the value of the service property will have many different classes, and can be used to distinguish the sort of service, with specific properties defined in a registered context document .

The value must be a string.

  • All resource types must have the type property. Clients must process, and may render, type on any resource type.
Class     Description
Dataset   Data not intended to be rendered to humans directly
Image     Two dimensional visual resources primarily intended to be seen, such as might be rendered with an <img> tag
Model     A three (or more) dimensional model intended to be interacted with by humans
Sound     Auditory resources primarily intended to be heard, such as might be rendered with an <audio> tag
Text      Resources primarily intended to be read
Video     Moving images, with or without accompanying audio, such as might be rendered with a <video> tag

format

The specific media type (often called a MIME type) for a content resource, for example image/jpeg . This is important for distinguishing different formats of the same overall type of resource, such as distinguishing text in XML from plain text.

Note that this is different to the formats property in the Image API , which gives the extension to use within that API . It would be inappropriate to use that value in this case, as format can be used with any content resource, not just images.

The value must be a string, and it should be the value of the Content-Type header returned when the resource is dereferenced.

  • A content resource should have the format property. Clients may render the format of any content resource.
  • Other types of resource must not have the format property. Clients should ignore format on other types of resource.

language

The language or languages used in the content of this external resource. This property is already available from the Web Annotation model for content resources that are the body or target of an Annotation, however it may also be used for resources referenced from homepage , rendering , and partOf .

The value must be an array of strings. Each item in the array must be a valid language code, as described in the languages section .

  • An external resource should have the language property with at least one item. Clients should process the language of external resources.
  • Other types of resource must not have the language property. Clients should ignore language on other types of resource.

profile

A schema or named set of functionality available from the resource. The profile can further clarify the type and/or format of an external resource or service, allowing clients to customize their handling of the resource that has the profile property.

The value must be a string, either taken from the profiles registry or a URI.

  • Resources referenced by the seeAlso or service properties should have the profile property. Clients should process the profile of a service or external resource.
  • Other types of resource may have the profile property. Clients may process the profile of other types of resource.

height

The height of the Canvas or external content resource. For content resources, the value is in pixels. For Canvases, the value does not have a unit. In combination with the width, it conveys an aspect ratio for the space in which content resources are located.

The value must be a positive integer.

  • A Canvas may have the height property. If it has a height , it must also have a width . Clients must process height on a Canvas.
  • Content resources should have the height property, with the value given in pixels, if appropriate to the resource type. Clients should process height on content resources.
  • Other types of resource must not have the height property. Clients should ignore height on other types of resource.

width

The width of the Canvas or external content resource. For content resources, the value is in pixels. For Canvases, the value does not have a unit. In combination with the height, it conveys an aspect ratio for the space in which content resources are located.

The value must be a positive integer.

  • A Canvas may have the width property. If it has a width , it must also have a height . Clients must process width on a Canvas.
  • Content resources should have the width property, with the value given in pixels, if appropriate to the resource type. Clients should process width on content resources.
  • Other types of resource must not have the width property. Clients should ignore width on other types of resource.

duration

The duration of the Canvas or external content resource, given in seconds.

The value must be a positive floating point number.

  • A Canvas may have the duration property. Clients must process duration on a Canvas.
  • Content resources should have the duration property, if appropriate to the resource type. Clients should process duration on content resources.
  • Other types of resource must not have a duration . Clients should ignore duration on other types of resource.
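For example, a minimal, non-normative Canvas sketch (placeholder URI and values) with both spatial and temporal extents would carry all three properties:

{
  "id": "https://example.org/iiif/canvas/scene1",
  "type": "Canvas",
  "height": 1080,
  "width": 1920,
  "duration": 125.0
}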

viewingDirection

The direction in which a set of Canvases should be displayed to the user. This specification defines four direction values in the table below. Others may be defined externally as an extension .

  • A Collection may have the viewingDirection property. Clients should process viewingDirection on a Collection.
  • A Manifest may have the viewingDirection property. Clients should process viewingDirection on a Manifest.
  • A Range may have the viewingDirection property. Clients may process viewingDirection on a Range.
  • Other types of resource must not have the viewingDirection property. Clients should ignore viewingDirection on other types of resource.
Value Description
left-to-right  The object is displayed from left to right. The default if not specified.
right-to-left  The object is displayed from right to left.
top-to-bottom  The object is displayed from the top to the bottom.
bottom-to-top  The object is displayed from the bottom to the top.
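For example, a Manifest for a right-to-left object might declare the direction as follows (non-normative sketch; other required properties omitted):

{
  "id": "https://example.org/iiif/manifest/book1",
  "type": "Manifest",
  "label": { "en": [ "Example Manifest" ] },
  "viewingDirection": "right-to-left"
}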

behavior

A set of user experience features that the publisher of the content would prefer the client to use when presenting the resource. This specification defines the values in the table below. Others may be defined externally as an extension .

In order to determine the behaviors that are governing a particular resource, there are four inheritance rules from resources that reference the current resource:

  • Collections inherit behaviors from their referencing Collection.
  • Manifests DO NOT inherit behaviors from any referencing Collections.
  • Canvases inherit behaviors from their referencing Manifest, but DO NOT inherit behaviors from any referencing Ranges, as there might be several with different behaviors.
  • Ranges inherit behaviors from any referencing Range and referencing Manifest.

Clients should interpret behaviors on a Range only when that Range is selected or is in some other way the context for the user’s current interaction with the resources. A Range with the behavior value continuous , in a Manifest with the behavior value paged , would mean that the Manifest’s Canvases should be rendered in a paged fashion, unless the range is selected to be viewed, and its included Canvases would be rendered in that context only as being virtually stitched together. This might occur, for example, when a physical scroll is cut into pages and bound into a codex with other pages, and the publisher would like to provide the user the experience of the scroll in its original form.

The descriptions of the behavior values have a set of which other values they are disjoint with, meaning that the same resource must not have both of two or more from that set. In order to determine which is in effect, the client should follow the inheritance rules above, taking the value from the closest resource. The user interface effects of the possible permutations of non-disjoint behavior values are client dependent, and implementers are advised to look for relevant recipes in the IIIF cookbook .

Future Clarification Anticipated: Further clarifications about the implications of interactions between behavior values should be expected in subsequent minor releases.

The value must be an array of strings.

  • Any resource type may have the behavior property with at least one item. Clients should process behavior on any resource type.
Value Description
Temporal Behaviors
auto-advance  Valid on Collections, Manifests, Canvases, and Ranges that include or are Canvases with at least the duration dimension. When the client reaches the end of a Canvas, or segment thereof as specified in a Range, with a duration dimension that has this behavior, it should immediately proceed to the next Canvas or segment and render it. If there is no subsequent Canvas in the current context, then this behavior should be ignored. When applied to a Collection, the client should treat the first Canvas of the next Manifest as following the last Canvas of the previous Manifest, respecting any start property specified. Disjoint with no-auto-advance.
no-auto-advance  Valid on Collections, Manifests, Canvases, and Ranges that include or are Canvases with at least the duration dimension. When the client reaches the end of a Canvas or segment with a duration dimension that has this behavior, it must not proceed to the next Canvas, if any. This is a default temporal behavior if not specified. Disjoint with auto-advance.
repeat  Valid on Collections and Manifests, that include Canvases that have at least the duration dimension. When the client reaches the end of the duration of the final Canvas in the resource, and the auto-advance value is also in effect, then the client should return to the first Canvas, or segment of Canvas, in the resource that has the repeat value and start playing again. If the auto-advance value is not in effect, then the client should render a navigation control for the user to manually return to the first Canvas or segment. Disjoint with no-repeat.
no-repeat  Valid on Collections and Manifests, that include Canvases that have at least the duration dimension. When the client reaches the end of the duration of the final Canvas in the resource, the client must not return to the first Canvas, or segment of Canvas. This is a default temporal behavior if not specified. Disjoint with repeat.
Layout Behaviors
unordered  Valid on Collections, Manifests and Ranges. The Canvases included in resources that have this behavior have no inherent order, and user interfaces should avoid implying an order to the user. Disjoint with individuals, continuous, and paged.
individuals  Valid on Collections, Manifests, and Ranges. For Collections that have this behavior, each of the included Manifests are distinct objects in the given order. For Manifests and Ranges, the included Canvases are distinct views, and should not be presented in a page-turning interface. This is the default layout behavior if not specified. Disjoint with unordered, continuous, and paged.
continuous  Valid on Collections, Manifests and Ranges, which include Canvases that have at least height and width dimensions. Canvases included in resources that have this behavior are partial views and an appropriate rendering might display all of the Canvases virtually stitched together, such as a long scroll split into sections. This behavior has no implication for audio resources. The viewingDirection of the Manifest will determine the appropriate arrangement of the Canvases. Disjoint with unordered, individuals and paged.
paged  Valid on Collections, Manifests and Ranges, which include Canvases that have at least height and width dimensions. Canvases included in resources that have this behavior represent views that should be presented in a page-turning interface if one is available. The first canvas is a single view (the first recto) and thus the second canvas likely represents the back of the object in the first canvas. If this is not the case, see the value non-paged . Disjoint with unordered, individuals, continuous, facing-pages and non-paged.
facing-pages  Valid only on Canvases, where the Canvas has at least height and width dimensions. Canvases that have this behavior, in a Manifest that has the value paged , must be displayed by themselves, as they depict both parts of the opening. If all of the Canvases are like this, then page turning is not possible, so simply use individuals instead. Disjoint with paged and non-paged.
non-paged  Valid only on Canvases, where the Canvas has at least height and width dimensions. Canvases that have this behavior must not be presented in a page turning interface, and must be skipped over when determining the page order. This behavior must be ignored if the current Manifest does not have the value paged . Disjoint with paged and facing-pages.
Collection Behaviors
multi-part  Valid only on Collections. Collections that have this behavior consist of multiple Manifests or Collections which together form part of a logical whole or a contiguous set, such as multi-volume books or a set of journal issues. Clients might render these Collections as a table of contents rather than with thumbnails, or provide viewing interfaces that can easily advance from one member to the next. Disjoint with together.
together  Valid only on Collections. A client should present all of the child Manifests to the user at once in a separate viewing area with its own controls. Clients should catch attempts to create too many viewing areas. This behavior should not be interpreted as applying to the members of any child resources. Disjoint with multi-part.
Range Behaviors
sequence  Valid only on Ranges, where the Range is referenced in the structures property of a Manifest. Ranges that have this behavior represent different orderings of the Canvases listed in the items property of the Manifest, and user interfaces that interact with this order should use the order within the selected Range, rather than the default order of items . Disjoint with thumbnail-nav and no-nav.
thumbnail-nav  Valid only on Ranges. Ranges that have this behavior may be used by the client to present an alternative navigation or overview based on thumbnails, such as regular keyframes along a timeline for a video, or sections of a long scroll. Clients should not use them to generate a conventional table of contents. Child Ranges of a Range with this behavior must have a suitable thumbnail property. Disjoint with sequence and no-nav.
no-nav  Valid only on Ranges. Ranges that have this behavior must not be displayed to the user in a navigation hierarchy. This allows for Ranges to be present that capture unnamed regions with no interesting content, such as the set of blank pages at the beginning of a book, or dead air between parts of a performance, that are still part of the Manifest but do not need to be navigated to directly. Disjoint with sequence and thumbnail-nav.
Miscellaneous Behaviors
hidden  Valid on Annotation Collections, Annotation Pages, Annotations, Specific Resources and Choices. If this behavior is provided, then the client should not render the resource by default, but allow the user to turn it on and off. This behavior does not inherit, as it is not valid on Collections, Manifests, Ranges or Canvases.
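For example (non-normative sketch, placeholder URIs), a paged Manifest whose cover Canvas should be skipped in the page order might be declared as:

{
  "id": "https://example.org/iiif/manifest/book1",
  "type": "Manifest",
  "behavior": [ "paged" ],
  "items": [
    {
      "id": "https://example.org/iiif/canvas/cover",
      "type": "Canvas",
      "behavior": [ "non-paged" ],
      "height": 3200,
      "width": 2400
    }
  ]
}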

timeMode

A mode associated with an Annotation that is to be applied to the rendering of any time-based media, or otherwise could be considered to have a duration, used as a body resource of that Annotation. Note that the association of timeMode with the Annotation means that different resources in the body cannot have different values. This specification defines the values specified in the table below. Others may be defined externally as an extension .

  • An Annotation may have the timeMode property. Clients should process timeMode on an Annotation.
Value Description
trim  (default, if not supplied) If the content resource has a longer duration than the duration of the portion of the Canvas it is associated with, then at the end of the Canvas’s duration, the playback of the content resource must also end. If the content resource has a shorter duration than the duration of the portion of the Canvas it is associated with, then, for video resources, the last frame should persist on-screen until the end of the Canvas portion’s duration. For example, a video of 120 seconds annotated to a Canvas with a duration of 100 seconds would play only the first 100 seconds and drop the last 20 seconds.
scale  Fit the duration of the content resource to the duration of the portion of the Canvas it is associated with by scaling. For example, a video of 120 seconds annotated to a Canvas with a duration of 60 seconds would be played at double-speed.
loop  If the content resource is shorter than the duration of the Canvas, it must be repeated to fill the entire duration. Resources longer than the duration must be trimmed as described above. For example, if a 20 second duration audio stream is annotated onto a Canvas with duration 30 seconds, it will be played one and a half times.
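For example, a short audio resource painted onto a longer Canvas could be looped for its full duration (non-normative sketch; URIs and durations are placeholders):

{
  "id": "https://example.org/iiif/annotation/background-music",
  "type": "Annotation",
  "motivation": "painting",
  "timeMode": "loop",
  "body": {
    "id": "https://example.org/audio/theme.mp3",
    "type": "Sound",
    "format": "audio/mpeg",
    "duration": 20.0
  },
  "target": "https://example.org/iiif/canvas/scene1"
}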

3.3. Linking Properties

These properties are references or links between resources, and are split into external references, where the linked object is outside of the IIIF space, and internal references, where the linked object is a IIIF resource. Clients typically create a link to the resource that is able to be activated by the user, or interact directly with the linked resource to improve the user’s experience.

3.3.1. External Links

homepage

A web page that is about the object represented by the resource that has the homepage property. The web page is usually published by the organization responsible for the object, and might be generated by a content management system or other cataloging system. The resource must be able to be displayed directly to the user. Resources that are related, but not home pages, must instead be added into the metadata property, with an appropriate label or value to describe the relationship.

The value of this property must be an array of JSON objects, each of which must have the id , type , and label properties, should have a format property, and may have the language property.

  • Any resource type may have the homepage property. Clients should render homepage on a Collection, Manifest or Canvas, and may render homepage on other types of resource.

Model Alignment: Please note that this specification has stricter requirements about the JSON pattern used for the homepage property than the Web Annotation Data Model . The IIIF requirements are compatible, but the home page of an Agent found elsewhere might have only a URI, or might be a JSON object with other properties. See the section on collisions between contexts for more information.
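A typical homepage entry looks like the following non-normative sketch (placeholder URI):

"homepage": [
  {
    "id": "https://example.org/info/book1/",
    "type": "Text",
    "label": { "en": [ "Home page for Book 1" ] },
    "format": "text/html"
  }
]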

logo

A small image resource that represents the Agent resource it is associated with. The logo must be clearly rendered when the resource is displayed or used, without cropping, rotating or otherwise distorting the image. It is recommended that a IIIF Image API service be available for this image for other manipulations such as resizing.

When more than one logo is present, the client should pick only one of them, based on the information in the logo properties. For example, the client could select a logo of appropriate aspect ratio based on the height and width properties of the available logos. The client may decide on the logo by inspecting properties defined as extensions .

The value of this property must be an array of JSON objects, each of which must have id and type properties, and should have format . The value of type must be “Image”.

  • Agent resources should have the logo property. Clients must render logo on Agent resources.

rendering

A resource that is an alternative, non-IIIF representation of the resource that has the rendering property. Such representations typically cannot be painted onto a single Canvas, as they either include too many views, have incompatible dimensions, or are compound resources requiring additional rendering functionality. The rendering resource must be able to be displayed directly to a human user, although the presentation may be outside of the IIIF client. The resource must not have a splash page or other interstitial resource that mediates access to it. If access control is required, then the IIIF Authentication API is recommended. Examples include a rendering of a book as a PDF or EPUB, a slide deck with images of a building, or a 3D model of a statue.

The value must be an array of JSON objects. Each item must have the id , type and label properties, and should have a format property.

  • Any resource type may have the rendering property with at least one item. Clients should render rendering on a Collection, Manifest or Canvas, and may render rendering on other types of resource.
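For example, a PDF rendering of a book might be referenced as follows (non-normative sketch, placeholder URI):

"rendering": [
  {
    "id": "https://example.org/iiif/book1.pdf",
    "type": "Text",
    "label": { "en": [ "Download as PDF" ] },
    "format": "application/pdf"
  }
]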

service

A service that the client might interact with directly and gain additional information or functionality for using the resource that has the service property, such as from an Image to the base URI of an associated IIIF Image API service. The service resource should have additional information associated with it in order to allow the client to determine how to make appropriate use of it. Please see the Service Registry document for the details of currently known service types.

The value must be an array of JSON objects. Each object will have properties depending on the service’s definition, but must have either the id or @id and type or @type properties. Each object should have a profile property.

  • Any resource type may have the service property with at least one item. Clients may process service on any resource type, and should process the IIIF Image API service.

For cross-version consistency, this specification defines the following values for the type or @type property for backwards compatibility with other IIIF APIs. Future versions of these APIs will define their own types. These type values are necessary extensions for compatibility of the older versions.

Value Specification
ImageService1  Image API version 1
ImageService2  Image API version 2
SearchService1  Search API version 1
AutoCompleteService1  Search API version 1
AuthCookieService1  Authentication API version 1
AuthTokenService1  Authentication API version 1
AuthLogoutService1  Authentication API version 1

Implementations should be prepared to recognize the @id and @type property names used by older specifications, as well as id and type . Note that the @context key should not be present within the service , but instead included at the beginning of the document. The example below includes both version 2 and version 3 IIIF Image API services.
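A minimal sketch of such a service array (placeholder URIs) follows; the first entry uses the version 2 @id / @type pattern and the second the version 3 id / type pattern:

"service": [
  {
    "@id": "https://example.org/image-service-2/page1",
    "@type": "ImageService2",
    "profile": "http://iiif.io/api/image/2/level2.json"
  },
  {
    "id": "https://example.org/image-service-3/page1",
    "type": "ImageService3",
    "profile": "level2"
  }
]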

services

A list of one or more service definitions on the top-most resource of the document, that are typically shared by more than one subsequent resource. This allows for these shared services to be collected together in a single place, rather than either having their information duplicated potentially many times throughout the document, or requiring a consuming client to traverse the entire document structure to find the information. The resource that the service applies to must still have the service property, as described above, where the service resources have at least the id and type or @id and @type properties. This allows the client to know that the service applies to that resource. Usage of the services property is at the discretion of the publishing system.

A client encountering a service property where the definition consists only of an id and type should then check the services property on the top-most resource for an expanded definition. If the service is not present in the services list, and the client requires more information in order to use the service, then it should dereference the id (or @id ) of the service in order to retrieve a service description.

The value must be an array of JSON objects. Each object must be a service resource, as described above.

  • A Collection may have the services property, if it is the topmost Collection in a response document. Clients should process services on a Collection.
  • A Manifest may have the services property. Clients should process services on a Manifest.
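As a non-normative sketch (placeholder URIs), a shared authentication service might be declared once in services and then referenced from individual resources by its @id and @type alone:

{
  "@context": "http://iiif.io/api/presentation/3/context.json",
  "id": "https://example.org/iiif/manifest/book1",
  "type": "Manifest",
  "services": [
    {
      "@id": "https://example.org/iiif/auth/login",
      "@type": "AuthCookieService1",
      "profile": "http://iiif.io/api/auth/1/login"
    }
  ]
}

A content resource elsewhere in the same document would then list only the @id and @type of this service in its own service property.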

seeAlso

A machine-readable resource such as an XML or RDF description that is related to the current resource that has the seeAlso property. Properties of the resource should be given to help the client select between multiple descriptions (if provided), and to make appropriate use of the document. If the relationship between the resource and the document needs to be more specific, then the document should include that relationship rather than the IIIF resource. Other IIIF resources are also valid targets for seeAlso , for example to link to a Manifest that describes a related object. The URI of the document must identify a single representation of the data in a particular format. For example, if the same data exists in JSON and XML , then separate resources should be added for each representation, with distinct id and format properties.

The value must be an array of JSON objects. Each item must have the id and type properties, and should have the label , format and profile properties.

  • Any resource type may have the seeAlso property with at least one item. Clients may process seeAlso on any resource type.

3.3.2. Internal Links

partOf

A containing resource that includes the resource that has the partOf property. When a client encounters the partOf property, it might retrieve the referenced containing resource, if it is not embedded in the current representation, in order to contribute to the processing of the contained resource. For example, the partOf property on a Canvas can be used to reference an external Manifest in order to enable the discovery of further relevant information. Similarly, a Manifest can reference a containing Collection using partOf to aid in navigation.

The value must be an array of JSON objects. Each item must have the id and type properties, and should have the label property.

  • Any resource type may have the partOf property with at least one item. Clients may render partOf on any resource type.

start

A Canvas, or part of a Canvas, which the client should show on initialization for the resource that has the start property. The reference to part of a Canvas is handled in the same way that Ranges reference parts of Canvases. This property allows the client to begin with the first Canvas that contains interesting content rather than requiring the user to manually navigate to find it.

The value must be a JSON object, which must have the id and type properties. The object must be either a Canvas (as in the first example below), or a Specific Resource with a Selector and a source property where the value is a Canvas (as in the second example below).

  • A Manifest may have the start property. Clients should process start on a Manifest.
  • A Range may have the start property. Clients should process start on a Range.
  • Other types of resource must not have the start property. Clients should ignore start on other types of resource.
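The two forms might look like the following non-normative sketches (placeholder URIs). The first references a whole Canvas; the second uses a Specific Resource with a PointSelector to reference a time point within a Canvas:

"start": {
  "id": "https://example.org/iiif/canvas/p2",
  "type": "Canvas"
}

"start": {
  "id": "https://example.org/iiif/manifest/book1/start",
  "type": "SpecificResource",
  "source": "https://example.org/iiif/canvas/p2",
  "selector": {
    "type": "PointSelector",
    "t": 14.5
  }
}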

supplementary

A link from this Range to an Annotation Collection that includes the supplementing Annotations of content resources for the Range. Clients might use this to present additional content to the user from a different Canvas when interacting with the Range, or to jump to the next part of the Range within the same Canvas. For example, the Range might represent a newspaper article that spans non-sequential pages, and then uses the supplementary property to reference an Annotation Collection that consists of the Annotations that record the text, split into Annotation Pages per newspaper page. Alternatively, the Range might represent the parts of a manuscript that have been transcribed or translated, when there are other parts that have yet to be worked on. The Annotation Collection would be the Annotations that transcribe or translate, respectively.

The value must be a JSON object, which must have the id and type properties, and the type must be AnnotationCollection .

  • A Range may have the supplementary property. Clients may process supplementary on a Range.
  • Other types of resource must not have the supplementary property. Clients should ignore supplementary on other types of resource.

3.4. Structural Properties

These properties define the structure of the object being represented in IIIF by allowing the inclusion of child resources within parents, such as a Canvas within a Manifest, or a Manifest within a Collection. The majority of cases use items , however there are two special cases for different sorts of structures.

items

Much of the functionality of the IIIF Presentation API is simply recording the order in which child resources occur within a parent resource, such as Collections or Manifests within a parent Collection, or Canvases within a Manifest. All of these situations are covered with a single property, items .

The value must be an array of JSON objects. Each item must have the id and type properties. The items will be resources of different types, as described below.

  • A Collection must have the items property. Each item must be either a Collection or a Manifest. Clients must process items on a Collection.
  • A Manifest must have the items property with at least one item. Each item must be a Canvas. Clients must process items on a Manifest.
  • A Canvas should have the items property with at least one item. Each item must be an Annotation Page. Clients must process items on a Canvas.
  • An Annotation Page should have the items property with at least one item. Each item must be an Annotation. Clients must process items on an Annotation Page.
  • A Range must have the items property with at least one item. Each item must be a Range, a Canvas or a Specific Resource where the source is a Canvas. Clients should process items on a Range.
  • Other types of resource must not have the items property. Clients should ignore items on other types of resource.
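The nesting described in the list above can be sketched as follows (non-normative; placeholder URIs, descriptive properties trimmed):

{
  "@context": "http://iiif.io/api/presentation/3/context.json",
  "id": "https://example.org/iiif/manifest/book1",
  "type": "Manifest",
  "label": { "en": [ "Example Manifest" ] },
  "items": [
    {
      "id": "https://example.org/iiif/canvas/p1",
      "type": "Canvas",
      "height": 2000,
      "width": 1500,
      "items": [
        {
          "id": "https://example.org/iiif/page/p1/1",
          "type": "AnnotationPage",
          "items": [
            {
              "id": "https://example.org/iiif/annotation/p1-image",
              "type": "Annotation",
              "motivation": "painting",
              "body": {
                "id": "https://example.org/images/p1.jpg",
                "type": "Image",
                "format": "image/jpeg",
                "height": 2000,
                "width": 1500
              },
              "target": "https://example.org/iiif/canvas/p1"
            }
          ]
        }
      ]
    }
  ]
}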

structures

The structure of an object represented as a Manifest can be described using a hierarchy of Ranges. Ranges can be used to describe the “table of contents” of the object or other structures that the user can interact with beyond the order given by the items property of the Manifest. The hierarchy is built by nesting the child Range resources in the items array of the higher level Range. The top level Ranges of these hierarchies are given in the structures property.

The value must be an array of JSON objects. Each item must have the id and type properties, and the type must be Range .

  • A Manifest may have the structures property. Clients should process structures on a Manifest. The first hierarchy should be presented to the user by default, and further hierarchies should be able to be selected as alternative structures by the user.
  • Other types of resource must not have the structures property. Clients should ignore structures on other types of resource.

annotations

An ordered list of Annotation Pages that contain commentary or other Annotations about this resource, separate from the Annotations that are used to paint content on to a Canvas. The motivation of the Annotations must not be painting , and the target of the Annotations must include this resource or part of it.

The value must be an array of JSON objects. Each item must have at least the id and type properties.

  • A Collection may have the annotations property with at least one item. Clients should process annotations on a Collection.
  • A Manifest may have the annotations property with at least one item. Clients should process annotations on a Manifest.
  • A Canvas may have the annotations property with at least one item. Clients should process annotations on a Canvas.
  • A Range may have the annotations property with at least one item. Clients should process annotations on a Range.
  • A content resource may have the annotations property with at least one item. Clients should process annotations on a content resource.
  • Other types of resource must not have the annotations property. Clients should ignore annotations on other types of resource.

3.5. Values

Values for motivation.

This specification defines two values for the Web Annotation property of motivation , or purpose when used on a Specific Resource or Textual Body.

While any resource may be the target of an Annotation, this specification defines only motivations for Annotations that target Canvases. These motivations allow clients to determine how the Annotation should be rendered, by distinguishing between Annotations that provide the content of the Canvas, from ones with externally defined motivations which are typically comments about the Canvas.

Additional motivations may be added to the Annotation to further clarify the intent, drawn from extensions or other sources. Clients must ignore motivation values that they do not understand. Other motivation values given in the Web Annotation specification should be used where appropriate, and examples are given in the Presentation API Cookbook .

Value Description
painting  Resources associated with a Canvas by an Annotation that has the motivation value painting must be presented to the user as the representation of the Canvas. The content can be thought of as being of the Canvas. The use of this motivation with target resources other than Canvases is undefined. For example, an Annotation that has the motivation value painting , a body of an Image and the target of the Canvas is an instruction to present that Image as (part of) the visual representation of the Canvas. Similarly, a textual body is to be presented as (part of) the visual representation of the Canvas and not positioned in some other part of the user interface.
supplementing  Resources associated with a Canvas by an Annotation that has the motivation value supplementing may be presented to the user as part of the representation of the Canvas, or may be presented in a different part of the user interface. The content can be thought of as being from the Canvas. The use of this motivation with target resources other than Canvases is undefined. For example, an Annotation that has the motivation value supplementing , a body of an Image and the target of part of the Canvas is an instruction to present that Image to the user either in the Canvas’s rendering area or somewhere associated with it, and could be used to present an easier to read representation of a diagram. Similarly, a textual body is to be presented either in the targeted region of the Canvas or otherwise associated with it, and might be OCR, a manual transcription or a translation of handwritten text, or captions for what is being said in a Canvas with audio content.

4. JSON-LD Considerations

This section describes features applicable to all of the Presentation API content. For the most part, these are features of the JSON-LD specification that have particular uses within the API .

4.1. Case Sensitivity

Terms in JSON-LD are case sensitive . The cases of properties and enumerated values in IIIF Presentation API responses must match those used in this specification. For example to specify that a resource is a Manifest, the property must be given as type and not Type or tYpE , and the value must be given as Manifest and not manifest or manIfEsT .

4.2. Resource Representations

Resource descriptions should be embedded within the JSON description of parent resources, and may also be available via separate requests from the HTTP (S) URI given in the resource’s id property. Links to resources must be given as a JSON object with the id and type properties and should have format or profile to give a hint as to what sort of resource is being referred to.

4.3. Properties with Multiple Values

Any of the properties in the API that can have multiple values must always be given as an array of values, even if there is only a single item in that array.

4.4. Language of Property Values

Language may be associated with strings that are intended to be displayed to the user for the label and summary properties, plus the label and value properties of the metadata and requiredStatement objects.

The values of these properties must be JSON objects, with the keys being the BCP 47 language code for the language, or if the language is either not known or the string does not have a language, then the key must be the string none . The associated values must be arrays of strings, where each item is the content in the given language.

Note that BCP 47 allows the script of the text to be included after a hyphen, such as ar-latn , and clients should be aware of this possibility.
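For example, a label with English and French values plus a language-neutral date might be given as (non-normative sketch):

"label": {
  "en": [ "Whistler's Mother" ],
  "fr": [ "La Mère de Whistler" ],
  "none": [ "1871" ]
}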

In the case where multiple values are supplied, clients must use the following algorithm to determine which values to display to the user.

  • If all of the values are associated with the none key, the client must display all of those values.
  • If any of the values have a language associated with them, the client must display all of the values associated with the language that best matches the language preference.
  • If all of the values have a language associated with them, and none match the language preference, the client must select a language and display all of the values associated with that language.
  • If some of the values have a language associated with them, but none match the language preference, the client must display all of the values that do not have a language associated with them.

Note that this does not apply to embedded textual bodies in Annotations, which use the Web Annotation pattern of value and language as separate properties.

4.5. HTML Markup in Property Values

Minimal HTML markup may be included for processing in the summary property and the value property in the metadata and requiredStatement objects. It must not be used in label or other properties. This is included to allow content publishers to add links and simple formatting instructions to blocks of text. The content must be well-formed XML and therefore must be wrapped in an element such as p or span . There must not be whitespace on either side of the HTML string, and thus the first character in the string must be a ‘<’ character and the last character must be ‘>’, allowing a consuming application to test whether the value is HTML or plain text using these. To avoid a non- HTML string matching this, it is recommended that an additional whitespace character be added to the end of the value in situations where plain text happens to start and end this way.
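For example, a summary value containing minimal markup might look like this non-normative sketch:

"summary": {
  "en": [ "<p>A <b>brief</b> description, with a link to <a href=\"https://example.org/more\">further information</a>.</p>" ]
}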

In order to avoid HTML or script injection attacks, clients must remove:

  • Tags such as script , style , object , form , input and similar.
  • All attributes other than href on the a tag, src and alt on the img tag.
  • All href attributes that start with strings other than “http:”, “https:”, and “mailto:”.
  • CData sections.
  • XML Comments.
  • Processing instructions.

Clients should allow only a , b , br , i , img , p , small , span , sub and sup tags. Clients may choose to remove any and all tags, therefore it should not be assumed that the formatting will always be rendered. Note that publishers may include arbitrary HTML content for processing using customized or experimental applications, and the requirements for clients assume an untrusted or unknown publisher.

4.6. Linked Data Context and Extensions

The top level resource in the response must have the @context property, and it should appear as the very first key/value pair of the JSON representation. This tells Linked Data processors how to interpret the document. The IIIF Presentation API context, below, must occur once per response in the top-most resource, and thus must not appear within embedded resources. For example, when embedding a Canvas within a Manifest, the Canvas will not have the @context property.

The value of the @context property must be either the URI http://iiif.io/api/presentation/3/context.json or a JSON array with the URI http://iiif.io/api/presentation/3/context.json as the last item. Further contexts, such as those for local or registered extensions , must be added at the beginning of the array.

Any additional properties beyond those defined in this specification or the Web Annotation Data Model should be mapped to RDF predicates using further context documents. These extensions should be added to the top level @context property, and must be added before the above context. The JSON-LD 1.1 functionality of predicate specific context definitions, known as scoped contexts , must be used to minimize cross-extension collisions. Extensions intended for community use should be registered in the extensions registry , but registration is not mandatory.
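A document using a local extension might therefore begin as follows (non-normative sketch; the extension context URI is a placeholder):

{
  "@context": [
    "https://example.org/extension/context.json",
    "http://iiif.io/api/presentation/3/context.json"
  ],
  "id": "https://example.org/iiif/manifest/book1",
  "type": "Manifest"
}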

The JSON representation must not include the @graph key at the top level. This key might be created when serializing directly from RDF data using the JSON-LD 1.0 compaction algorithm. Instead, JSON-LD framing and/or custom code should be used to ensure the structure of the document is as defined by this specification.

4.7. Term Collisions between Contexts

There are some common terms used in more than one JSON-LD context document. Every attempt has been made to minimize these collisions, but some are inevitable. In order to know which specification is in effect at any given point, the class of the resource that has the property is the primary governing factor. Thus properties on Annotation based resources use the context from the Web Annotation Data Model , whereas properties on classes defined by this specification use the IIIF Presentation API context’s definition.

There is one property that is in direct conflict - the label property is defined by both and is available for every resource. The use of label in IIIF follows modern best practices for internationalization by allowing the language to be associated with the value using the language map construction described above . The Web Annotation Data Model uses it only for Annotation Collections , and mandates the format is a string. For this property, the API overrides the definition from the Annotation model to ensure that labels can consistently be represented in multiple languages.

The following properties are defined by both, and the IIIF representation is more specific than the Web Annotation Data Model but are not in conflict, or are never used on the same resource:

  • homepage : In IIIF the home page of a resource is represented as a JSON object, whereas in the Web Annotation Data Model it can also be a string.
  • type : In IIIF the type is singular, whereas in the Web Annotation Data Model there can be more than one type.
  • format : In IIIF the format of a resource is also singular, whereas in the Web Annotation Data Model there can be more than one format.
  • language : In IIIF the language property always takes an array, whereas in the Web Annotation Data Model it can be a single string.
  • start : The start property is used on a Manifest to refer to the start Canvas or part of a Canvas and thus is a JSON object, whereas in the Web Annotation Data Model it is used on a TextPositionSelector to give the start offset into the textual content and is thus an integer.

The rights , partOf , and items properties are defined by both in the same way.

4.8. Keyword Mappings

The JSON-LD keywords @id , @type and @none are mapped to id , type and none by the Presentation API linked data context . Thus in content conforming to this version of the Presentation API , the only JSON key beginning with @ will be @context . However, the content may include data conforming to older specifications or external specifications that use keywords beginning with @ . Clients should expect to encounter both syntaxes.

5. Resource Structure

This section provides detailed description of the resource types used in this specification. Section 2 provides an overview of the resource types and figures illustrating allowed relationships between them, and Appendix A provides summary tables of the property requirements.

5.1. Collection

Collections are used to list the Manifests available for viewing. Collections may include both other Collections and Manifests, in order to form a tree-structured hierarchy. Collections might align with the curated management of cultural heritage resources in sets, also called “collections”, but may have absolutely no such similarity.

The intended usage of Collections is to allow clients to:

  • Load a pre-defined set of Manifests at initialization time.
  • Receive a set of Manifests, such as search results, for rendering.
  • Visualize lists or hierarchies of related Manifests.
  • Provide navigation through a list or hierarchy of available Manifests.

Collections may be embedded inline within other Collections, such as when the Collection is used primarily to subdivide a larger one into more manageable pieces, however Manifests must not be embedded within Collections. An embedded Collection should also have its own URI from which the JSON description is available.

Manifests or Collections may be referenced from more than one Collection. For example, an institution might define four Collections: one for modern works, one for historical works, one for newspapers and one for books. The Manifest for a modern newspaper would then appear in both the modern Collection and the newspaper Collection. Alternatively, the institution may choose to have two separate newspaper Collections, and reference each as a sub-Collection of modern and historical.

Collections with an empty items property are allowed but discouraged. For example, if the user performs a search that matches no Manifests, then the server may return a Collection response with no Manifests.

Collections or Manifests referenced in the items property must have the id , type and label properties. They should have the thumbnail property.

An example Collection document:
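(A minimal, non-normative sketch with placeholder URIs:)

{
  "@context": "http://iiif.io/api/presentation/3/context.json",
  "id": "https://example.org/iiif/collection/top",
  "type": "Collection",
  "label": { "en": [ "Collection for Example Organization" ] },
  "items": [
    {
      "id": "https://example.org/iiif/manifest/book1",
      "type": "Manifest",
      "label": { "en": [ "Book 1" ] },
      "thumbnail": [
        {
          "id": "https://example.org/images/book1-thumb.jpg",
          "type": "Image",
          "format": "image/jpeg"
        }
      ]
    },
    {
      "id": "https://example.org/iiif/collection/books",
      "type": "Collection",
      "label": { "en": [ "Further books" ] }
    }
  ]
}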

Note that while the Collection may reference Collections or Manifests from previous versions of the API , the information included in this document must follow the current version requirements, not the requirements of the target document. This is in contrast to the requirements of service , as there is no way to distinguish a version 2 Manifest from a version 3 Manifest by its type .

5.2. Manifest

The Manifest resource typically represents a single object and any intellectual work or works embodied within that object. In particular it includes descriptive, rights and linking information for the object. The Manifest embeds the Canvases that should be rendered as views of the object and contains sufficient information for the client to initialize itself and begin to display something quickly to the user.

The identifier in id must be able to be dereferenced to retrieve the JSON description of the Manifest, and thus must use the HTTP (S) URI scheme.

The Manifest must have an items property, which is an array of JSON-LD objects. Each object is a Canvas, with requirements as described in the next section. The Manifest may also have a structures property listing one or more Ranges which describe additional structure of the content, such as might be rendered as a table of contents. The Manifest may have an annotations property, which includes Annotation Page resources where the Annotations have the Manifest as their target . These will typically be comment style Annotations, and must not have painting as their motivation .

5.3. Canvas

The Canvas represents an individual page or view and acts as a central point for assembling the different content resources that make up the display. Canvases must be identified by a URI and it must be an HTTP (S) URI. The URI of the canvas must not contain a fragment (a # followed by further characters), as this would make it impossible to refer to a segment of the Canvas’s area using the media fragment syntax of #xywh= for spatial regions, and/or #t= for temporal segments. Canvases may be able to be dereferenced separately from the Manifest via their URIs as well as being embedded .

Every Canvas should have a label to display. If one is not provided, the client should automatically generate one for use based on the Canvas’s position within the items property.

Content resources are associated with the Canvas via Web Annotations. Content that is to be rendered as part of the Canvas must be associated by an Annotation that has the motivation value painting . These Annotations are recorded in the items of one or more Annotation Pages, referred to in the items array of the Canvas. Annotations that do not have the motivation value painting must not be in pages referenced in items , but instead in the annotations property. Referenced, external Annotation Pages must have the id and type properties.

Content that is derived from the Canvas, such as a manual or automatic (OCR) transcription of text in an image or the words spoken in an audio representation, must be associated by an Annotation that has the motivation value supplementing . Annotations may have any other motivation values as well. Thus, content of any type may be associated with the Canvas via an Annotation that has the motivation value painting , meaning the content is part of the Canvas; an Annotation that has the motivation value supplementing , meaning the content is from the Canvas but not necessarily part of it; or an Annotation with another motivation meaning that it is somehow about the Canvas.

A Canvas must have a rectangular aspect ratio (described with the height and width properties) and/or a duration to provide an extent in time. These dimensions allow resources to be associated with specific regions of the Canvas, within the space and/or time extents provided. Content must not be associated with space or time outside of the Canvas’s dimensions, such as at coordinates below 0,0, greater than the height or width, before 0 seconds, or after the duration. Content resources that have dimensions which are not defined for the Canvas must not be associated with that Canvas by an Annotation that has the motivation value painting . For example, it is valid to use an Annotation that has the motivation value painting to associate an Image (which has only height and width) with a Canvas that has all three dimensions, but it is an error to associate a Video resource (which has height, width and duration) with a Canvas that does not have all three dimensions. Such a resource should instead be referenced using the rendering property, or by Annotations that have a motivation value other than painting in the annotations property.

Parts of Canvases may be described using a Specific Resource with a Selector, following the patterns defined in the Web Annotation data model. The use of the FragmentSelector class is recommended by that specification, as it allows for refinement by other Selectors and for consistency with use cases that cannot be represented using a URI fragment directly. Parts of Canvases can be referenced from Ranges, as the body or target of Annotations, or in the start property.

Parts of Canvases may also be identified by appending a fragment to the Canvas’s URI, and these parts are still considered to be Canvases: their type value is the string Canvas . Rectangular spatial parts of Canvases may also be described by appending an xywh= fragment to the end of the Canvas’s URI. Similarly, temporal parts of Canvases may be described by appending a t= fragment to the end of the Canvas’s URI. Spatial and temporal fragments may be combined, using an & character between them, and the temporal dimension should come first. It is an error to select a region using a dimension that is not defined by the Canvas, such as a temporal region of a Canvas that only has height and width dimensions.

Canvases may be treated as content resources for the purposes of annotating on to other Canvases. For example, a Canvas (Canvas A) with a video resource and Annotations representing subtitles or captions may be annotated on to another Canvas (Canvas B). This pattern maintains the correct spatial and temporal alignment of Canvas A’s content relative to Canvas B’s dimensions.

Renderers must scale content into the space represented by the Canvas, and should follow any timeMode value provided for time-based media. If the Canvas represents a view of a physical object, the spatial dimensions of the Canvas should be the same scale as that physical object, and content should represent only the object.

5.4. Range

Ranges are used to represent structure within an object beyond the default order of the Canvases in the items property of the Manifest, such as newspaper sections or articles, chapters within a book, or movements within a piece of music. Ranges can include Canvases, parts of Canvases, or other Ranges, creating a tree structure like a table of contents.

The intent of adding a Range to the Manifest is to allow the client to display a linear or hierarchical navigation interface to enable the user to quickly move through the object’s content. Clients should present only Ranges that have the label property and do not have a behavior value no-nav to the user. Clients should not render Canvas labels as part of the navigation, and a Range that wraps the Canvas must be created if this is the desired presentation.

If there is no Range that has the behavior value sequence , and the Manifest does not have the behavior value unordered , then the client should treat the order of the Canvases in the Manifest’s items array as the default order. If there is one Range that has the behavior value sequence , then the client must instead use this Range for the ordering. If there is more than one Range that has the behavior value sequence , for example a second Range to represent an alternative ordering of the pages of a manuscript, the first Range should be used as the default and the others should be able to be selected. Ranges that have the behavior value sequence must be directly within the structures property of the Manifest, and must not be embedded or referenced within other Ranges. These Ranges may have limited hierarchical nesting, but clients are not expected to traverse very deep structures in determining the default order. If this Range includes parts of Canvases, then these parts are the content to render by default and would generate separate entries in a navigation display. This allows for the Canvas to include content outside of the default view, such as a color bar or ruler.

Ranges must have URIs and they should be HTTP (S) URIs. Top level Ranges are embedded or externally referenced within the Manifest in a structures property. These top level Ranges then embed or reference other Ranges, Canvases or parts of Canvases in the items property. Each entry in the items property must be a JSON object, and it must have the id and type properties. If a top level Range needs to be dereferenced by the client, then it must not have the items property, such that clients are able to recognize that it should be retrieved in order to be processed.

All of the Canvases or parts that should be considered as being part of a Range must be included within the Range’s items property, or a descendant Range’s items .

The Canvases and parts of Canvases need not be contiguous or in the same order as in the Manifest’s items property or any other Range. Examples include newspaper articles that are continued in different sections, a chapter that starts half way through a page, or time segments of a single canvas that represent different sections of a piece of music.

Ranges may link to an Annotation Collection that has the content of the Range using the supplementary property. The referenced Annotation Collection will contain Annotations that target areas of Canvases within the Range and link content resources to those Canvases.
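A table of contents built from Ranges might therefore be sketched as follows (non-normative; placeholder URIs; the second item of the nested Range references only a region of a Canvas):

"structures": [
  {
    "id": "https://example.org/iiif/range/r0",
    "type": "Range",
    "label": { "en": [ "Table of Contents" ] },
    "items": [
      {
        "id": "https://example.org/iiif/range/r1",
        "type": "Range",
        "label": { "en": [ "Introduction" ] },
        "items": [
          { "id": "https://example.org/iiif/canvas/p1", "type": "Canvas" },
          { "id": "https://example.org/iiif/canvas/p2#xywh=0,0,750,300", "type": "Canvas" }
        ]
      }
    ]
  }
]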

5.5. Annotation Page

Association of Images and other content with their respective Canvases is done via Annotations. Although Annotations are traditionally used for associating commentary with the resource that the Annotation’s text or body is about, the Web Annotation model allows any resource to be associated with any other resource, or parts thereof, and it is reused both for commentary and for painting resources on to the Canvas. Other resources beyond images might include the full text of the object, musical notations, musical performances, diagram transcriptions, commentary Annotations, tags, video, data and more.

These Annotations are collected together in Annotation Page resources, which are included in the items property from the Canvas. Each Annotation Page can be embedded in its entirety, if the Annotations should be processed as soon as possible when the user navigates to that Canvas, or it can be a reference to an external page. This reference must include id and type , must not include items and may include other properties, such as behavior . All of the Annotations in the Annotation Page should have the Canvas as their target . Clients should process the Annotation Pages and their items in the order given in the Canvas. Publishers may choose to expedite the processing of embedded Annotation Pages by ordering them before external pages, which will need to be dereferenced by the client.

An Annotation Page must have an HTTP (S) URI given in id , and may have any of the other properties defined in this specification or the Web Annotation specification. The Annotations are listed in the items property of the Annotation Page.

Incompatibility Warning: The definition of label in the Web Annotation specification does not produce JSON conformant with the structure defined in this specification for languages. Given the absolute requirement for internationalized labels and the strong desire for consistently handling properties, the label property on Annotation model classes does not conform to the string requirement of the Web Annotation Data Model. This issue has been filed with the W3C and will hopefully be addressed in a future version of the standard.

5.6. Annotation

Annotations follow the Web Annotation data model. The description provided here is a summary plus any IIIF specific requirements. The W3C standard is the official documentation.

Annotations must have their own HTTP (S) URIs, conveyed in the id property. The JSON-LD description of the Annotation should be returned if the URI is dereferenced, according to the Web Annotation Protocol .

When Annotations are used to associate content resources with a Canvas, the content resource is linked in the body of the Annotation. The URI of the Canvas must be repeated in the target property of the Annotation, or the source property of a Specific Resource used in the target property.

Note that the Web Annotation data model defines different patterns for the value property, when used within an Annotation. The value of a Textual Body or a Fragment Selector, for example, are strings rather than JSON objects with languages and values. Care must be taken to use the correct string form in these cases.
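For example, a transcription of a region of a Canvas could be supplied with an embedded Textual Body, in which value and language are plain strings rather than language maps (non-normative sketch, placeholder URIs):

{
  "id": "https://example.org/iiif/annotation/p1-line1",
  "type": "Annotation",
  "motivation": "supplementing",
  "body": {
    "type": "TextualBody",
    "value": "Letter to the editor, 14 June 1892",
    "language": "en",
    "format": "text/plain"
  },
  "target": "https://example.org/iiif/canvas/p1#xywh=100,100,1200,300"
}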

Additional features of the Web Annotation data model may also be used, such as selecting a segment of the Canvas or content resource, or embedding the comment or transcription within the Annotation. The use of these advanced features sometimes results in situations where the target is not a content resource, but instead a SpecificResource, a Choice, or other non-content object. Implementations should check the type of the resource and not assume that it is always content to be rendered.

The IIIF community has defined additional Selector classes for use with SpecificResources, especially for cases when it is not possible to use the official FragmentSelector. See the additional documentation for details.

5.7. Content Resources

Content resources are external web resources that are referenced from within the Manifest or Collection. This includes images, video, audio, data, web pages or any other format.

As described in the Canvas section, the content associated with a Canvas (and therefore the content of a Manifest) is provided by the body property of Annotations with the painting motivation. Content resources can also be referenced from thumbnail , homepage , logo , rendering , and seeAlso properties.

Content resources must have an id property, with the value being the URI at which the resource can be obtained.

The type of the content resource must be included, and should be taken from the table listed under the definition of type . The format of the resource should be included and, if so, should be the media type that is returned when the resource is dereferenced. The profile of the resource, if it has one, should also be included. Content resources in appropriate formats may also have the language , height , width , and duration properties. Content resources may also have descriptive and linking properties, as defined in section 3 .

If the content resource is an Image, and a IIIF Image service is available for it, then the id property of the content resource may be a complete URI to any particular representation supported by the Image Service, such as https://example.org/image1/full/1000,/0/default.jpg , but must not be just the URI of the IIIF Image service. Its type value must be the string Image . Its media type may be listed in format , and its height and width may be given as integer values for height and width respectively. The Image should have the service referenced from it using the service property.
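As a non-normative sketch of such a resource (URIs and dimensions invented), the service reference might be structured as follows:

```js
// Sketch of an Image content resource advertising a IIIF Image service.
// The id points at a concrete representation, not at the Image service base URI.
const imageResource = {
  id: "https://example.org/image1/full/1000,/0/default.jpg",
  type: "Image",
  format: "image/jpeg",
  height: 1500,
  width: 1000,
  service: [
    {
      id: "https://example.org/image1",
      type: "ImageService3",
      profile: "level1"
    }
  ]
};
```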

If there is a need to distinguish between content resources, then the resource should have the label property.

A Canvas may be treated as a content resource for the purposes of annotating it on to other Canvases. In this situation, the Canvas may be embedded within the Annotation, or require dereferencing to obtain its description.

5.8. Annotation Collection

Annotation Collections represent groupings of Annotation Pages that should be managed as a single whole, regardless of which Canvas or resource they target. This allows, for example, all of the Annotations that make up a particular translation of the text of a book to be collected together. A client might then present a user interface that allows all of the Annotations in an Annotation Collection to be displayed or hidden according to the user’s preference.

Annotation Collections must have a URI, and it should be an HTTP(S) URI. They should have a label and may have any of the other descriptive, linking or rights properties.

For Annotation Collections with many Annotations, there will be many pages. The Annotation Collection refers to the first and last page, and then the pages refer to the previous and next pages in the ordered list. Each page is part of the Annotation Collection.

6. HTTP Requests and Responses

This section describes the recommended request and response interactions for the API . The REST and simple HATEOAS approach is followed where an interaction will retrieve a description of the resource, and additional calls may be made by following links obtained from within the description. All of the requests use the HTTP GET method; creation and update of resources is not covered by this specification. It is recommended that implementations also support HTTP HEAD requests.

6.1. URI Recommendations

While any HTTP(S) URI is technically acceptable for any of the resources in the API, there are several best practices for designing the URIs for the resources.

  • The URI should use the HTTPS scheme, not HTTP.
  • The URI should not include query parameters or fragments.
  • Once published, URIs should be as persistent and unchanging as possible.
  • Special characters must be encoded.

6.2. Requests

Clients are only expected to follow links to Presentation API resources. Unlike IIIF Image API requests, or other parameterized services, the URIs for Presentation API resources cannot be assumed to follow any particular pattern.

6.3. Responses

The format for all responses is JSON, as described above. It is good practice for all resources with an HTTP(S) URI to provide their description when the URI is dereferenced. If a resource is referenced within a response, rather than being embedded, then it must be able to be dereferenced.

If the server receives a request with an Accept header, it should respond following the rules of content negotiation . Note that content types provided in the Accept header of the request may include parameters, for example profile or charset .

If the request does not include an Accept header, the HTTP Content-Type header of the response should have the value application/ld+json ( JSON-LD ) with the profile parameter given as the context document: http://iiif.io/api/presentation/3/context.json .

If the Content-Type header application/ld+json cannot be generated due to server configuration details, then the Content-Type header should instead be application/json (regular JSON ), without a profile parameter.
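As a rough sketch of a conforming client-side request (the Manifest URL is a placeholder), a browser-based client might ask for the JSON-LD form explicitly:

```js
// Sketch: request a Manifest as JSON-LD, naming the Presentation 3 context in the
// profile parameter of the Accept header. The URL is illustrative only.
async function fetchManifest() {
  const accept =
    'application/ld+json;profile="http://iiif.io/api/presentation/3/context.json"';
  const response = await fetch("https://example.org/iiif/book1/manifest", {
    headers: { Accept: accept }
  });
  // The server should answer with application/ld+json (ideally with the profile
  // parameter), or fall back to plain application/json.
  console.log(response.headers.get("Content-Type"));
  return response.json();
}
```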

The HTTP server must follow the CORS requirements to enable browser-based clients to retrieve the descriptions. If the server receives a request with one of the content types above in the Accept header, it should respond with that content type following the rules of content negotiation . Recipes for enabling CORS and conditional Content-Type headers are provided in the Apache HTTP Server Implementation Notes .

Responses should be compressed by the server as there are significant performance gains to be made for very repetitive data structures.

7. Authentication

It may be necessary to restrict access to the descriptions made available via the Presentation API. As the primary means of interaction with the descriptions is by web browsers using XMLHttpRequests across domains, there are some considerations regarding the most appropriate methods for authenticating users and authorizing their access. The approach taken is described in the Authentication specification, and requires requesting a token to add to the requests to identify the user. This token might also be used for other requests defined by other APIs.

It is possible to include Image API service descriptions within the Manifest, and within those it is also possible to include links to the Authentication API ’s services that are needed to interact with the image content. The first time an Authentication API service is included within a Manifest, it must be the complete description. Subsequent references should be just the URI of the service, and clients are expected to look up the details from the full description by matching the URI. Clients must anticipate situations where the Authentication service description in the Manifest is out of date: the source of truth is the Image Information document, or other system that references the Authentication API services.

A. Summary of Property Requirements

Icon Meaning
Required
Recommended
Optional
Not Allowed

Descriptive and Rights Properties

  label metadata summary requiredStatement rights navDate language
Collection
Manifest
Canvas
Annotation
AnnotationPage
Range
AnnotationCollection
Content Resources
  provider thumbnail placeholderCanvas accompanyingCanvas
Collection
Manifest
Canvas
Annotation
AnnotationPage
Range
AnnotationCollection
Content Resources

*A Canvas that is the value of a placeholderCanvas or accompanyingCanvas property may not have either of those properties itself.

Technical Properties

  id type format profile height width duration viewingDirection behavior timeMode
Collection
Manifest
Canvas
Annotation
Annotation Page
Range
Annotation Collection
Content Resources

*If a Canvas has either of height and width , it must have the other, as described in the definitions of those properties.

Linking Properties

  seeAlso service homepage rendering partOf start supplementary services
Collection
Manifest
Canvas
Annotation
Annotation Page
Range
Annotation Collection
Content Resources

Structural Properties

  items structures annotations
Collection
Manifest
Canvas
Annotation
Annotation Page
Range
Annotation Collection
Content Resources

Behavior Values

  Collection Manifest Canvas Range
auto-advance
continuous
facing-pages
individuals
multi-part
no-auto-advance
no-nav
no-repeat
non-paged
hidden *
paged
repeat
sequence
thumbnail-nav
together
unordered

* hidden is allowed on Annotation Collections, Annotation Pages, Annotations, Specific Resources and Choices, as these are the classes that result in rendering content to the user.

B. Example Manifest Response
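The full example response is not reproduced here. As a minimal, non-normative sketch of the overall shape of a Manifest (a single Canvas carrying one painting Annotation; all URIs and dimensions are invented), expressed as a JavaScript object literal:

```js
const manifest = {
  "@context": "http://iiif.io/api/presentation/3/context.json",
  id: "https://example.org/iiif/book1/manifest",
  type: "Manifest",
  label: { en: ["Example Object"] },
  items: [
    {
      id: "https://example.org/iiif/book1/canvas/p1",
      type: "Canvas",
      height: 1500,
      width: 1000,
      items: [
        {
          id: "https://example.org/iiif/book1/page/p1/1",
          type: "AnnotationPage",
          items: [
            {
              id: "https://example.org/iiif/book1/annotation/p1-image",
              type: "Annotation",
              motivation: "painting",
              body: {
                id: "https://example.org/iiif/book1/res/page1.jpg",
                type: "Image",
                format: "image/jpeg",
                height: 1500,
                width: 1000
              },
              target: "https://example.org/iiif/book1/canvas/p1"
            }
          ]
        }
      ]
    }
  ]
};
```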

C. Versioning

Starting with version 2.0, this specification follows Semantic Versioning . See the note Versioning of APIs for details regarding how this is implemented.

D. Acknowledgements

Many thanks to the members of the IIIF community for their continuous engagement, innovative ideas, and feedback.

Many of the changes in this version are due to the work of the IIIF AV Technical Specification Group , chaired by Jason Ronallo (North Carolina State University), Jon Dunn (Indiana University) and Tom Crane (Digirati). The IIIF Community thanks them for their leadership, and the members of the group for their tireless work.

E. Change Log

Date Description
2020-06-03 Version 3.0 (Surfing Raven)
2017-06-09 Version 2.1.1
2016-05-12 Version 2.1 (Hinty McHintface)
2014-09-11 Version 2.0 (Triumphant Giraffe)
2013-08-26 Version 1.0 (unnamed)
2013-06-14 Version 0.9 (unnamed)
Google Slides API

This section presents a set of sample applications and "recipe" examples that demonstrate how to translate an intended Google Slides action into a Google Slides API request.

Custom presentation tool for analysis of common software licenses.

The Slides codelab teaches you how to use the Google Slides API as a custom presentation tool for an analysis of the most common software licenses.

You'll learn how to query all open source code on GitHub using BigQuery and create a slide deck using the Slides API to present your results.

Sample applications

The Markdown to Slides command-line tool lets you generate slide decks from markdown files.

You can use this to explore the Slides API, or fork the repository and modify the code to provide Slides output to your JavaScript application.

The examples listed in this section demonstrate how to express common actions in Slides as Slides API requests.

These examples are presented as HTTP requests to be language neutral. To learn how to implement Slides API request protocols in a specific language using Google API client libraries, see the following guides:

  • Create a slide
  • Add shapes and text
  • Merge data into a presentation
  • Add charts to a slide
  • Edit and style text

Recipes in this section are divided into the following categories:

  • Basic reading —Recipes that show common ways of reading information from a presentation.
  • Basic writing —Recipes that show common ways of writing to a presentation.
  • Element operations —Recipes that show common page element creation and editing tasks.
  • Presentation operations —Recipes that show how to create and manipulate a presentation.
  • Slide operations —Recipes that show how to create, move, and delete slides in a presentation.
  • Table operations —Recipes that show how to create and edit tables within a slide.
  • Transform operations —Recipes that show how to alter the size and positioning of elements within a slide.

There's often more than one way to complete a given task with the Slides API. Use the batch method presentations.batchUpdate wherever possible to bundle multiple update requests into a single method call. This reduces client HTTP overhead, reduces the number of queries, minimizes the number of revisions on the presentation, and applies all the changes atomically.
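As a rough sketch of what a bundled call can look like (assuming you already hold an OAuth 2.0 access token with a Slides scope and a real presentation ID; both values below are placeholders):

```js
// Sketch: send several changes in one presentations.batchUpdate call so they are
// applied atomically as a single revision.
async function addSlide(presentationId, accessToken) {
  const body = {
    requests: [
      // One or more update requests; more can be appended to the same call.
      { createSlide: { insertionIndex: 1 } }
    ]
  };
  const res = await fetch(
    `https://slides.googleapis.com/v1/presentations/${presentationId}:batchUpdate`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify(body)
    }
  );
  return res.json();
}
```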

To further improve performance, use field masks when reading and updating presentations, pages, and page elements.


The HTML Presentation Framework

Created by Hakim El Hattab and contributors


Hello There

reveal.js enables you to create beautiful interactive slide decks using HTML. This presentation will show you examples of what it can do.

Vertical Slides

Slides can be nested inside of each other.

Use the Space key to navigate through all slides.

Down arrow

Basement Level 1

Nested slides are useful for adding additional detail underneath a high level horizontal slide.

Basement Level 2

That's it, time to go back up.

Up arrow

Not a coder? Not a problem. There's a fully-featured visual editor for authoring these, try it out at https://slides.com .

Pretty Code

Code syntax highlighting courtesy of highlight.js .

Even Prettier Animations

Point of view.

Press ESC to enter the slide overview.

Hold down the alt key ( ctrl in Linux) and click on any element to zoom towards it using zoom.js . Click again to zoom back out.


Auto-Animate

Automatically animate matching elements across slides with Auto-Animate .

Touch Optimized

Presentations look great on touch devices, like mobile phones and tablets. Simply swipe through your slides.

Add the r-fit-text class to auto-size text

Hit the next arrow...

... to step through ...

... a fragmented slide.

Fragment Styles

There's different types of fragments, like:

fade-right, up, down, left

fade-in-then-out

fade-in-then-semi-out

Highlight red blue green

Transition Styles

You can select from different transitions, like: None - Fade - Slide - Convex - Concave - Zoom

Slide Backgrounds

Set data-background="#dddddd" on a slide to change the background color. All CSS color formats are supported.

Image Backgrounds

Tiled backgrounds, video backgrounds, ... and gifs, background transitions.

Different background transitions are available via the backgroundTransition option. This one's called "zoom".

You can override background transitions per-slide.

Iframe Backgrounds

Since reveal.js runs on the web, you can easily embed other web content. Try interacting with the page in the background.

Marvelous List

  • No order here

Fantastic Ordered List

  1. One is smaller than...
  2. Two is smaller than...

Tabular Tables

  Item | Value | Quantity
  Apples | $1 | 7
  Lemonade | $2 | 18
  Bread | $3 | 2

Clever Quotes

These guys come in two forms, inline: The nice thing about standards is that there are so many to choose from and block:

“For years there has been a theory that millions of monkeys typing at random on millions of typewriters would reproduce the entire works of Shakespeare. The Internet has proven this theory to be untrue.”

Intergalactic Interconnections

You can link between slides internally, like this .

Speaker View

There's a speaker view . It includes a timer, preview of the upcoming slide as well as your speaker notes.

Press the S key to try it out.

Export to PDF

Presentations can be exported to PDF , here's an example:

Global State

Set data-state="something" on a slide and "something" will be added as a class to the document element when the slide is open. This lets you apply broader style changes, like switching the page background.

State Events

Additionally custom events can be triggered on a per slide basis by binding to the data-state name.

Take a Moment

Press B or . on your keyboard to pause the presentation. This is helpful when you're on stage and want to take distracting slides off the screen.

  • Right-to-left support
  • Extensive JavaScript API
  • Auto-progression
  • Parallax backgrounds
  • Custom keyboard bindings

- Try the online editor - Source code & documentation

Create Stunning Presentations on the Web

reveal.js is an open source HTML presentation framework. It's a tool that enables anyone with a web browser to create fully-featured and beautiful presentations for free.

Presentations made with reveal.js are built on open web technologies. That means anything you can do on the web, you can do in your presentation. Change styles with CSS, include an external web page using an <iframe> or add your own custom behavior using our JavaScript API .

The framework comes with a broad range of features including nested slides , Markdown support , Auto-Animate , PDF export , speaker notes , LaTeX support and syntax highlighted code .

Ready to Get Started?

It only takes a minute to get set up. Learn how to create your first presentation in the installation instructions !

Online Editor

If you want the benefits of reveal.js without having to write HTML or Markdown try https://slides.com . It's a fully-featured visual editor and platform for reveal.js, by the same creator.

Supporting reveal.js

This project was started and is maintained by @hakimel with the help of many contributions from the community . The best way to support the project is to become a paying member of Slides.com —the reveal.js presentation platform that Hakim is building.


Slides.com — the reveal.js presentation editor.

Become a reveal.js pro in the official video course.


SlideSpeak API

Integrate our API and automate workflows with ease. Generate presentations and summarize PowerPoints, PDFs, Word documents and more using our API.

Join the waitlist today!


SlideSpeak API Demo Command

API Functionality

  • length: Length of the summary
  • language: Language of the summary

Summarize PowerPoint API Example

  • docId: Source Document ID
  • length: Length of the presentation
  • language: Language of the presentation
  • images[]: Images to be used
  • tone: Writing style of the presentation
  • template: Presentation template to use
  • message: Message or question to prompt
  • language: Language of the response

Chat with PowerPoint API Example

Simple Integration

Integrate our modern JSON API with pretty much any programming language out there.

Our pricing is based on requests and is very transparent. Need a custom plan? Reach out!

Whether you’re building an experimental or an advanced app, our API is super easy to integrate.

Build amazing workflows with our API

AI-generated presentations and summaries.

Summarize and generate PowerPoint files with ChatGPT. Upload, chat, and gain new insights from your presentations. Use SlideSpeak AI to boost your productivity.


Presentation

Limited availability.

This feature is not Baseline because it does not work in some of the most widely-used browsers.


Experimental: This is an experimental technology Check the Browser compatibility table carefully before using this in production.

Secure context: This feature is available only in secure contexts (HTTPS), in some or all supporting browsers .

The Presentation API distinguishes two kinds of user agent in this context: the controlling user agent and the receiving user agent.

In a controlling browsing context, the Presentation interface provides a mechanism to override the browser's default behavior of launching a presentation to an external screen. In a receiving browsing context, the Presentation interface provides access to the available presentation connections.

Instance properties

In a controlling user agent , the defaultRequest attribute MUST return the default presentation request if any, null otherwise. In a receiving browsing context , it MUST return null .

In a receiving user agent , the receiver attribute MUST return the PresentationReceiver instance associated with the receiving browsing context and created by the receiving user agent when the receiving browsing context is created.
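As a minimal sketch of how these attributes are typically used from script (the receiver page URL below is a placeholder):

```js
// Controlling context: set a default request so the browser's built-in
// presentation UI launches this URL when invoked.
if (navigator.presentation) {
  navigator.presentation.defaultRequest =
    new PresentationRequest(["https://example.com/receiver.html"]);
}

// Receiving context: the receiver attribute exposes incoming connections.
if (navigator.presentation && navigator.presentation.receiver) {
  navigator.presentation.receiver.connectionList.then((list) => {
    list.connections.forEach((connection) => {
      connection.addEventListener("message", (event) => {
        console.log("Message from controller:", event.data);
      });
    });
  });
}
```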


Introduction

Welcome to the PresentationGPT API!

What is PresentationGPT?

A simple, powerful and flexible presentation generation engine.

Getting Started

1. Create an API key

See Authentication

2. Start using the REST APIs

See REST APIs

3. Purchase a subscription

PresentationGPT gives you free access to the API except for the download links of the files. In order to get access to the PPTX, PDF & Google Slides links, you will need to pay for an API plan. Navigate to the API keys page in your dashboard to subscribe to an API plan.


SlideTeam


API example template

Showcase how the interface between two applications takes place by using this API Example Template. Use this communication protocol PPT slide to explain the working of an API. Display the framework of an API by taking advantage of the computer program PowerPoint graphic. Make a knowledgeable presentation by giving some examples of the application programming interface. Showcase the comparison between web services and APIs in order to make your presentation effective. Also, showcase the methods involved in API testing, such as GET, POST, DELETE, and PUT, by utilizing this interface PPT layout. There are various high-quality icons present in the slide which will make your presentation informative and reliable. Describe the usage of APIs by explaining the concepts of libraries and frameworks, operating systems, remote APIs, and web APIs. Mention the interface between client and server by downloading this ready-to-use Application Programming Interface PowerPoint Presentation.




PowerPoint presentation slides

Presenting the API Example Template slide. You can modify the font color, font size, and font type in the slide as per your requirements. This slide can be downloaded in any format, like JPG, PNG, and PDF, without any trouble. It is available in both widescreen and standard screen ratios. Its compatibility with Google Slides makes it accessible at once.



API example template with all 5 slides:

Be identified as a favorite with our API Example Template. Folks will gun for your advice.


What is a Product Demonstration - Types, Benefits, Tips

Hiba Fathima


Product demonstrations are the heartbeat of SaaS sales. They bridge the gap between product features and customer needs, turning curiosity into conviction.

Whether you're a startup or an established player, mastering the art of product demos can significantly boost your conversion rates and accelerate growth.

This guide will walk you through everything you need to know about crafting compelling product demonstrations that resonate with your audience and drive results.

What is a Product Demonstration?

Product demonstration fundamentals

A product demonstration is a focused presentation that showcases a product's key features, benefits, and applications. It's a vital tool in sales and marketing, designed to:

  • Highlight the product's value
  • Illustrate its practical uses
  • Address potential customer needs

Product demos can take various forms:

  • Live presentations
  • Interactive product demos
  • Pre-recorded videos

The primary goal is to give potential customers a clear understanding of how the product works and why it's worth their investment. Whether it's software, hardware, or a service, a well-executed demo can significantly influence purchasing decisions.

Modern, effective product demonstrations go beyond mere descriptions. They provide hands-on experiences or visual representations that allow the audience to envision how the product could solve their specific problems or improve their processes.

Benefits of Conducting Product Demonstrations

Preferred content types for making purchase decisions; product demos are at 27%

Let's take a detailed look at the benefits of product demonstrations and why you should focus on creating the perfect product demos:

1. Accelerates sales processes

Product demonstrations serve as a powerful catalyst in the sales process, significantly accelerating the journey from initial interest to final purchase.  

By providing potential customers with a hands-on experience, demos offer a clear and concise understanding of the product's value proposition.

This direct interaction allows customers to quickly grasp how the product addresses their specific needs, leading to faster decision-making. As a result, the sales cycle is streamlined, potentially boosting overall sales figures and improving the efficiency of the entire sales funnel.

2. Proves product effectiveness

The importance of product demos for SaaS companies [lies in] building a strong market. SaaS companies create new markets, addressing a need no one had thought of before. But [first], they have to persuade people to solve an outdated issue with a new approach. With a product demo, you can clearly explain why your innovation boosts the status quo and encourage potential buyers to adopt new behaviors. - Stella Cooper, CEO at PaydayLoansUK.

One of the most compelling aspects of product demonstrations is their ability to prove a product's effectiveness in real time. Unlike marketing materials or sales pitches, demos provide tangible evidence of a product's capabilities and performance. This firsthand experience builds customer confidence in both the product and the brand behind it.

By showcasing how the product solves real-world problems or enhances existing processes, demonstrations move beyond mere claims, offering concrete proof of value. This visual and interactive validation can be particularly powerful in industries where reliability and performance are crucial factors in purchasing decisions.

3. Gathers direct feedback

A product demo also allows you to learn more about your target market. Requesting a product demonstration is a strong buying signal. Therefore, those who take the time to listen to a product demo reflect the characteristics of your ideal customers. Demos enable you to understand their needs and difficulties better. You can then use this learning to aid you in converting more sales in the future. - Sean O’Neal, President at Onclusive.

Product demonstrations also offer an invaluable opportunity for direct customer feedback. During a demo, potential buyers can ask questions, raise concerns, and provide immediate reactions to the product's features and functionality.

This real-time interaction creates a goldmine of insights for product development teams. By identifying common pain points, areas of interest, or potential improvements, companies can refine their offerings to better meet market demands. Moreover, this two-way communication fosters stronger relationships with customers, showing that the company values their input and is committed to meeting their needs.

4. Creates a sense of ownership

A well-executed product demonstration can create a powerful sense of ownership among potential customers. By allowing users to interact with the product in a meaningful way, demos serve as a risk-free trial experience. This hands-on approach helps customers envision how the product would fit into their own workflows or solve their specific challenges.

As customers engage with the product, they become more invested in its potential benefits, which can significantly reduce perceived risks associated with the purchase. This psychological ownership can be a key factor in overcoming objections and nudging customers toward a positive buying decision.

5. Reinforces brand identity

Beyond showcasing the product itself, product demonstrations provide an excellent platform for reinforcing brand identity. The style, language, and overall presentation of a demo can be tailored to reflect the company's unique personality and values. This cohesive brand experience helps create a lasting impression on potential customers, improving brand recall and differentiation in a crowded marketplace.

By aligning the demo with the company's broader mission and ethos, businesses can create a more emotional connection with their audience, moving beyond feature comparisons to establish a deeper brand relationship.

6. Addresses specific customer needs

During the product demo, you will have the opportunity to listen to the customer’s specific requirements and demonstrate how your product will meet these needs and make their jobs easier. Being able to show how your product can assist them can significantly impact their purchasing decision. - Sean O’Neal, President at Onclusive.

One of the key strengths of product demonstrations is their adaptability to specific customer needs. Unlike generic marketing materials, demos can be customized to address the unique requirements of different industries, company sizes, or use cases. This tailored approach ensures that the demonstration remains relevant and engaging for each audience.

In fact, 40% of buyers prefer personalized product demonstrations to make software purchase decisions.  

By highlighting features and benefits that directly align with a customer's pain points, demos can clearly illustrate how the product fits into existing workflows or integrates with current systems. This level of customization not only improves the effectiveness of the presentation but also shows potential customers that the company understands and values their specific needs.

The Impact of Product Demonstrations on Conversion Rates

Product demonstrations play a crucial role in the success of SaaS businesses. A well-crafted demo can significantly boost your sales team's performance and directly impact your bottom line.

Consider this scenario:

An average SaaS company might conduct 6-10 demos per week, or roughly 320 demos a year. Let's say their product has an annual contract value of $25,000. If their demo-to-sale conversion rate is on the lower end at 25%, that works out to about 80 closed deals, or around $2 million in annual sales. However, if they improve their conversion rate to 40%, the same number of demos yields about 128 deals, or roughly $3.2 million in annual sales.

This example illustrates the potential impact of effective product demonstrations. By improving your demo strategy, you could potentially increase your revenue by over $1 million annually without increasing the number of demos conducted.

Factors that can influence demo effectiveness include:

  • Tailoring the demo to the specific needs of each prospect
  • Clearly highlighting the product's value proposition
  • Addressing common pain points
  • Providing an engaging and interactive experience

Remember, a great product demonstration isn't just about showcasing features. It's about demonstrating how your solution can solve real problems for your potential customers.

By focusing on creating impactful demos, you can significantly improve your conversion rates, boost your sales, and drive the growth of your SaaS business.

Types of Product Demonstrations in SaaS

Now, let's dive right into the different types of SaaS demos you can create to delight your prospects and convert them effortlessly.

1. Product Demo Videos

Product demo videos are visual tools that showcase a SaaS product's functionality and key features. These videos play a crucial role in customer acquisition by educating potential clients about the product's capabilities and benefits.

Effective product demo videos typically:

  • Run 2-5 minutes long
  • Focus on core functionalities
  • Use high-quality visuals and clear narration
  • Demonstrate problem-solving capabilities
  • Can be customized for different audiences

Common uses include:

  • Website and product pages
  • Landing pages
  • Social media and email marketing
  • Paid advertising

The main goals are to:

  • Capture viewer attention
  • Highlight unique selling points
  • Provide a clear call-to-action (e.g., sign up for a trial, schedule a demo)

By offering a visual explanation of the product, these videos can effectively communicate value, build trust, and move prospects through the sales funnel.

Here's an example of a great product demo video by Linear -

2. Live Product Demos from a Sales Team

Live product demonstrations offer a personalized approach to showcasing SaaS solutions. Conducted by sales representatives, these demos provide direct interaction with potential customers.

Key features:

  • Real-time presentation of product functionality
  • Tailored to specific customer needs and questions
  • Can be delivered in-person or via video conferencing
  • Suitable for both one-on-one sessions and group webinars
  • Highly engaging and interactive
  • Allows for immediate addressing of customer concerns
  • Demonstrates how the product solves specific customer problems
  • Provides opportunity for relationship-building

Live demos are particularly effective for:

  • Complex products requiring detailed explanation
  • High-value deals needing personalized attention
  • Generating qualified leads
  • Moving prospects closer to purchase decisions

By offering a personalized experience, live demos help sales teams showcase product value, build trust, and close deals more effectively. They allow for dynamic presentations that can adapt in real-time to customer interests and questions.


3. Interactive Demo

Interactive product demos offer potential customers an immersive, hands-on experience of the product during the entire buying and enablement journey — including discovery , purchase, and adoption.

Typically built from pre-programmed steps, an interactive product demonstration walks users through your product's key features or persona-based benefits in a step-by-step, engaging way. Best of all, viewers don't need to download or have access to your tool (or be a customer) to learn about these features. No paywall, no subscription, no endless discovery calls before getting hands-on with the product.

  • Pre-programmed, step-by-step guidance
  • Highlight key features and persona-based benefits
  • Accessible without product installation or user accounts
  • Customizable for different user journeys
  • Engages users actively in the discovery process
  • Accelerates the buying decision
  • Reduces friction in the sales process
  • Allows prospects to experience the product firsthand

Applications:

  • Pre-sales exploration
  • Self-service product discovery
  • Customer onboarding
  • Feature adoption

Here’s an example of an interactive product demo which guides you through the features of Lemlist .

Interactive demos bridge the gap between marketing materials and the actual product experience. They empower potential customers to explore the product at their own pace, fostering a deeper understanding and connection with the solution.

Which Type of Product Demonstration Should You Use?

Choosing the right product demo depends on your SaaS business model and product characteristics. Several factors influence this decision, with your growth strategy being a key consideration.

Sales-led vs. Product-led Growth

Sales-led vs. Product-led Growth - The difference

For product-led growth companies, the product itself is the main marketing tool. These businesses often prefer interactive in-app demos. This approach allows potential customers to engage directly with the product, experiencing its value firsthand. Interactive demos can effectively showcase the product's features and benefits without requiring a full commitment from the user.

On the other hand, sales-led companies typically benefit more from sales demo videos. These videos are designed to optimize the sales process by presenting the product's key features and benefits in an engaging, visual format. They can be easily shared with potential customers and used to support sales conversations, making them an ideal tool for businesses that rely on a more traditional sales approach.

However, the choice isn't always black and white. Many successful SaaS companies use a mix of demo types to cater to different stages of the customer journey and various customer preferences. The key is to understand your target audience and choose the demo type that best addresses their needs and concerns.

Consider factors like your product's complexity, your target market's preferences, and your sales cycle length when deciding on the most effective demo type. Remember, the goal is to provide potential customers with the information they need in the most engaging and accessible way possible.

High-touch vs. Low-touch Onboarding

Your onboarding approach affects which type of product demo works best for your SaaS business.


Low-touch Onboarding

Low-touch onboarding is all about self-service. It's for customers who like to learn on their own. This approach uses:

  • Automated video demos
  • Interactive product tours
  • Online help guides

These tools let users set up and use the product by themselves. Low-touch works well for simple products or tech-savvy users who don't need much help.

High-touch Onboarding

High-touch onboarding offers personal help. It's used for:

  • Complex products
  • Enterprise customers
  • High-paying clients

This approach often includes:

  • Live demos by sales reps
  • One-on-one training
  • Custom setup plans

High-touch makes sure important customers get all the help they need.

Your choice depends on what you're selling and who's buying. Some businesses use both approaches, offering different levels of help based on customer needs or what they're paying.

The main goal is to help customers start using your product successfully. Pick the approach that best fits your product and customers.

The Complexity of Your Product

B2B SaaS companies rely on their sales model to drive growth, with three main options: self-serve, enterprise, and transactional. The choice depends on Average Revenue Per User (ARPU) and Customer Acquisition Cost (CAC).

It's crucial for sales and marketing teams to align on this model to avoid wasted resources, inconsistent messaging, and missed opportunities. By ensuring everyone understands and operates within the right model for your product and market, you can create a focused and effective growth strategy.

presentation api demo

And...how complex your product is plays a big role in choosing the right demo type.

Complex products take longer to learn and need more guidance. For these, live demos might seem good at first, as you can answer questions directly. But they can take too much time and be hard for clients to remember everything.

A better approach for complex products is:

  • Creating multiple product demo videos.
  • Adding these videos to an online resource center.

This way, users can:

  • Find the exact help they need, when they need it.
  • Watch videos as many times as they want.
  • Learn at their own pace.

Simpler products might do well with:

  • Interactive demos
  • Short overview videos
  • Quick-start guides

The goal is to match your demo style to how much help users need to understand your product.

Remember, the right product demonstration makes it easier for customers to learn and use your product successfully. Furthermore, according to Salesforce, 45% of end users prefer short and easy-to-understand demos.

How to Create an Effective Product Demo?

Crafting a successful product demo requires careful planning and execution. Here's how to approach it:

Match Demo Types to the Buyer's Journey

Stages of buyer's journey

Different stages of the buyer's journey call for different types of demos.

Remember, the goal is to guide potential customers through the sales funnel. Each demo should be designed to move them to the next stage of their decision-making process.

By aligning your demo type with the buyer's journey, you can:

  • Address the right questions at the right time
  • Provide relevant information as needed
  • Increase engagement and interest
  • Improve conversion rates at each stage

Using the Right Tools for Your Product Demo

Once you've decided on the type of demo, it's time to create it. Choosing the right tool is crucial for producing an effective demo. Here are some of our suggestions to help you get started.

For Pre-recorded Demos:

Consider tools like Loom or Screen Studio for simple screen recordings. For more advanced editing, look into software like Camtasia.

For Live Demos:

Popular options include Zoom, Google Meet, or Vimeo for live streaming. Choose based on your needs for interactivity and audience size.

For Interactive Demos:

Supademo is an excellent choice for creating engaging, interactive product tours. It allows you to create guided walkthroughs of your product without coding.

Interactive Demos are Redefining Product Demonstrations As We Know It Across Teams

B2B buying has changed. It's no longer a linear journey where you can get the buyer on a discovery call, book a demo walkthrough, and seal the deal.

Today's buyers demand more than that:

  • They want to play around the product before they make a case to get buy-in;
  • They want to realize the product's value before they pay for it;
  • They want to be confident in their purchase;

To meet this demand for more information up-front, you can no longer depend solely on jargon or static images that lack fidelity or context for the user. While these static assets yield some basic results, they fall short when it comes to visually demonstrating your product's features or benefits in a captivating way.

And that's where interactive demo software like Supademo can help.

Whether you work in marketing, sales, or customer success, interactive product demos have some amazing benefits to offer. Key highlights include shortening sales cycles, increasing prospect conversions, efficiency gains, and faster product adoption.

While there are hundreds of benefits depending on the use case, here’s a quick visual overview of some of the main benefits:

Benefits of interactive product demonstrations

Get Started with Your First Interactive Product Demonstration

In conclusion, interactive demos help you break down barriers between your product and your buyers and users. By empowering them with the power to discover, adopt, and educate at their own pace, you can build trust, reduce skepticism, and boost engagement.

And, with Supademo, anyone can create beautifully interactive product demos in just a few minutes – for free with no technical expertise required.

Even better, you get more than just recording or creating a demo with Supademo. There are countless features to help trigger and accelerate the Aha! moment for your buyers. So, head over to Supademo to start creating an engaging, interactive demo – it's free!


Create beautifully interactive product demos in minutes.

Start a 14-day free trial, no credit card required.

Related articles

Introducing Supa Screenshot: beautiful, instant screenshots for free

Step-by-Step Guide: 7 Tips from the CEO on Creating Better Interactive Demos

How to Leverage Synthetic AI Voices for Interactive Product Demos

Get the fastest, easiest interactive demo platform for teams.

Microsoft Power BI Blog


Power BI August 2024 Feature Summary

By Jason Himmelstein

Welcome to the August 2024 update.

Here are a few select highlights of the many updates we have for Power BI. You can now ask Copilot questions against your semantic model, the Save and Upload to OneDrive flow in Power BI has been updated, and the narrative visual with Copilot is available in SaaS embed. There is much more to explore, so please read on!

European Fabric Community Conference

Join us at Europe’s first Fabric Community Conference, the ultimate Power BI, Fabric, SQL & AI learning event, in Stockholm, Sweden from September 24-27, 2024.

With 120 sessions, daily keynotes, 10 pre-conference workshops, an expo hall with community lounge, and “ask the expert” area, the conference offers a rich learning experience you don’t want to miss. This is a unique opportunity to meet the Microsoft teams building these products, customers betting their business on them, and partners at the forefront of deployment and adoption.

Register today  using code MSCUST for an  exclusive discount!

Fabric Sticker Challenge Winners Announced!

The Fabric Community Sticker Challenge launched August 1-23 and winners are in! All Fabric Community members were invited to create unique stickers showcasing their enthusiasm and creativity under the following categories: Community Enthusiasm, Inspirational, “Inside Joke” for developers and data, and Super Users. To see winning designs, check out our Community News . Thank you all who participated in this challenge; it was great to see so much involvement!

Fabric Influencers Spotlight

Check out our latest initiative, the Fabric Influencers Spotlight. Each month, we'll be highlighting some of the great blog posts, videos, presentations and other contributions submitted by members of the Microsoft MVP & Fabric Super User communities that cover the Fabric Platform, Data Engineering & Data Science in Fabric, Data Warehousing, Power BI, Real-Time Intelligence, Data Integration, Fabric Administration & Governance, Databases and Learning.

Attention Power BI users! 

If you are accessing Power BI on a web browser version older than Chrome 94, Edge 94, Safari 16.4, Firefox 93, or equivalent, you need to upgrade your web browser to a newer version by August 31, 2024. Using an outdated browser version after this date may prevent you from accessing features in Power BI.


  • Version number: v2.132.908.0
  • Date published: 8/19/24
  • Ask Copilot questions against your semantic model (preview)
  • Visual level format strings (preview)
  • Dynamic per recipient subscriptions (Generally Available)
  • Deliver subscriptions to OneDrive and SharePoint (Generally Available)
  • Updated Save and Upload to OneDrive Flow in Power BI
  • Visuals, shapes and line enhancements
  • DAX query view in the web
  • Narrative visual with Copilot available in SaaS embed

Editor’s pick of the quarter

New visuals in AppSource:

  • Filter by Powerviz
  • Pie of Pie by JTA
  • Drill Down Pie PRO by ZoomCharts
  • Hierarchical Bar Chart
  • Deneb: Declarative Visualization in Power BI

  • Paginated Reports: Sharing of reports connecting to Get Data data sources made easy 

Copilot and AI

Ask Copilot questions against your semantic model (preview)

We are pleased to announce that you can now ask Copilot for data from your entire semantic model in Desktop ! Just tell Copilot what you’re looking for, and Copilot will query your model to answer your question with a visual.

To use this new capability, you need to have the preview feature for “Copilot chat pane in report view” turned on. If you have already done this, there is nothing else you need to do to use this new capability.

presentation api demo

To find out more about how this feature works and the types of questions that are supported check out our previous blog post and documentation page .

Visual level format strings (preview)

Visual level format strings are here, providing you with more options to configure formatting. Originally built for visual calculations, their core ability is letting you format visual calculations. Since visual calculations are not in the model, you previously could not format them unless you were using them in data labels or in specific parts of the new card and new slicer visuals. With visual level format strings, you can!

The visual calculations edit mode showing the DiffPreviousPercent calculation that returns a percentage which is formatted as a percentage using the data format options in the format pane.

Visual level format strings, however, are useful even without using visual calculations.

With the introduction of visual-level format strings, Power BI now has three levels for format strings:

  • Model. You can set a format string for columns and measures in the model. Anywhere you use that column or measure the format string will be applied, unless it’s overridden by a visual or element level format string.
  • Visual. This is what we’re introducing today. You can set format strings on any column, measure or visual calculation that is on your visual, even if they already had a format string. In that case the model level format string will be overridden, and the visual level format string is used.
  • Element. You can set a format string for data labels and for specific elements of the new card and the new slicer visuals. This level will be expanded to include much more in the future. Any format string you set here will override the format string set on the visual and model level.

These levels are hierarchical, with the model level being the lowest and the element level the highest. A format string defined on a column, measure or visual calculation at a higher level overrides what was defined at a lower level.

Since visual calculations are not in the model, they cannot have a format string set on the model level but can on the visual or element level. Measures and columns can have format strings on all three levels:

  Level | Impacts | Columns and measures | Visual calculations
  Element | Selected element of the selected visual | Yes | Yes
  Visual | Selected visual | Yes | Yes
  Model | All visuals, all pages, all reports on the same model | Yes | No

The image below summarizes this and shows that higher level format strings override lower-level format strings:


Let’s look at an example using a measure.

I have a Profit measure in my model, which is set to a decimal number format. To do this, you might have set the formatting for this measure using the ribbon:

The formatting options in the ribbon allow you to set formatting for measures and fields.

Alternatively, you could have made the same selections in the properties pane for the measure in the model view or entered the following custom formatting code:

Formatting options in the properties pane showing #,#.## to format the Total measure as a decimal number in the model.

If you put this measure on a visual it now returns a decimal number, as expected:

A table visual showing the Total measure formatted as a decimal number.

However, on a particular visual you want that measure to be formatted as a whole number. You can now do that by setting the format code at the visual level: open the format pane for that visual and use the Data format options found under General:

You can set a visual level format string by selecting the visual and opening the format pane. There, go to General > Properties and then Data format. Finally, open Format options and enter the format string.

Now that same measure shows as a whole number, but just on that visual:

A table visual showing the Total measure formatted as a whole number.

On top of that, you might want to use a scientific notation for that measure but only in the data label on a particular visual. No problem, you set the format code on the data label for that measure:

You can set an element level format string by leveraging the settings in the format pane. For example, set the display units for Data label values to Custom and enter a format code.

So now the total shows in scientific notation, but only in the data label and not in other places (such as the tooltip as shown below). Notice how the element level format is used in the data label but the visual or model level format string is still used for the other elements in the same visual.

A bar chart showing the Total measure by class. It also shows that the Total measure was formatted in scientific notation in the data labels, but not in the tooltip (in which it's formatted as a decimal number).

For visual calculations the same principle applies but of course without the model level. For example, if you have a visual calculation that returns a percentage, you can now format it as such using the Data Format options in the General on the visual in the format pane:

The visual calculations edit mode showing the DiffPreviousPercent calculation that returns a percentage which is formatted as a percentage using the data format options in the format pane.

The ability to set visual level format strings makes it much easier to get the exact formatting you need for your visualizations. However, this is only the first iteration of the visual level format strings. We are planning to add the settings you’re used to for the model level format strings to the visual level soon.

Since visual level format strings are introduced as part of the visual calculations preview, you will need to turn on the visual calculations preview to use them. To do that, go to Options and Settings  >  Options  >  Preview features . Select  Visual calculations  and select  OK . Visual calculations and visual level format strings are enabled after Power BI Desktop is restarted.

Please refer to our docs to read more about format strings or visual calculations .

Dynamic per recipient subscriptions (Generally Available)

We are excited to announce the general availability of Dynamic per recipient subscriptions for Power BI and paginated reports. Dynamic per recipient subscriptions is designed to simplify distributing a personalized copy of a report to each recipient of an email subscription. You define which view of the report an individual receives by specifying which filters are applied to their version of the report. The feature is now available in sovereign clouds as well.


Connect to data that has recipient email addresses, names, or report parameters.


Then, select and filter data that you want in your subscription. You probably only want to send emails conditionally. To do that, you can filter the data in the “Filter” pane.


You can select the recipient email addresses and the email subject from the dataset that you connected to by selecting “Get Data”.


You can then map your data to the subscription.


Then schedule the subscription and save it.


The subscriptions will be triggered based on the schedule that you have set up. Personalized reports can be sent to up to a thousand recipients! Learn more about Dynamic per recipient subscriptions for Power BI reports, and paginated reports .

Do you have reports that are too large to be delivered by email? Do you have reports that fill up your email storage in just a few weeks, or that you need to move to a different location? You can now deliver Power BI and paginated report subscriptions to OneDrive or SharePoint. With this capability, you can schedule and send full report attachments to a OneDrive or SharePoint location. Learn more about how to deliver report subscriptions to OneDrive or SharePoint.


Updated Save and Upload to OneDrive Flow in Power BI

Beginning the first week of August, Desktop users should see a preview switch, starting in SU8, to turn on the updated Save and Upload to OneDrive experience in Power BI. To enable it, navigate to the Preview features section of Options in Power BI and select "Saving to OneDrive and SharePoint uploads the file in the background".

With these updates, we've improved the experience of uploading new Power BI files to OneDrive and made it easy to upload new changes in the background.

Select options, then Preview features, then select Saving to OneDrive and SharePoint uploads the file in the background.

For uploading new files, after navigating to the correct location in the OneDrive file picker and saving, a dialog box appears while the file is being uploaded. The option to cancel the upload is there if needed. This dialog will only show up the first time a new file is uploaded to OneDrive.


Dialog for saving a new file to OneDrive.

When new changes are saved to a file uploaded to OneDrive, the top of the toolbar indicates that the new changes are also being uploaded to OneDrive.


Additional changes being uploaded in the background to the existing file.

If you click on the title bar flyout in the toolbar, you can also now access more information about the file. Clicking “View your file in OneDrive” will provide a direct link to where the file is stored in OneDrive.


Drop down including the link to the file in OneDrive.

We are introducing the data limit capability to help you manage performance issues. This feature lets you set the maximum amount of data a visual loads in a single session, displaying only that many rows of data, in ascending order by default.

To use this feature: 

  • Go to the ‘Filters on this visual’ menu in the filter pane.


  • Set your desired data limit value.


The filter card features include: 

  • Removing, locking, or clearing filters.  
  • Hiding or showing filters.
  • Expanding or collapsing filter cards.
  • Applying filters.
  • Renaming and reordering filters.

Report consumers can see any data limits applied to a visual in the filter visual header, even if the filter pane is hidden.

Visuals, shapes and line enhancements

Over the past few months, we have been fine-tuning the visual elements of your reports, including columns, bars, ribbons, and lines. We have given you the ability to craft these Cartesian visuals with precision. However, we noticed that the legends and tooltips were not quite accurate.


With the latest update, the legend and tooltip icons will now automatically and accurately reflect per-series formatting settings, such as border colors, shapes, and line styles. This makes it easier to match series to their visual representations. Additionally, we have made per-series formatting consistent across line charts, column/bar charts, scatter charts, and other Cartesian chart types, as well as formatting options for common items like error bars and anomalies.

Check out the Reporting demos here:

DAX query view in the web

Write DAX queries on your published semantic models with DAX query view in the web. DAX query view, already available in Power BI Desktop, is now also available when you are in the workspace.

There are two ways to open Write DAX queries on your published semantic model:

  • Right-click on the semantic model and choose Write DAX queries .
  • Click on the semantic model to open the details page, then click Write DAX queries at the top of the page.


This will launch DAX query view in the web, where you can write DAX queries, use quick queries to have DAX queries written for you on tables, columns, or measures, or use Fabric Copilot not only to write DAX queries but also to explain DAX queries, functions, or topics. DAX queries work on semantic models in import, DirectQuery, and Direct Lake storage mode.

presentation api demo

Write permission, that is, permission to make changes to the semantic model, is currently needed to write DAX queries in the web. In addition, the workspace setting User can edit data models in the Power BI service (preview) needs to be enabled.

DAX query view in the web includes the measure-authoring workflow from DAX query view: define measures with references, edit any of them, and try out changes across multiple measures by running the DAX query, then update the model with all the changes in a single click of a button. DAX query view in the web brings this functionality to semantic models in Direct Lake mode for the first time!

presentation api demo

If you do not have write permission, you can still live connect to the semantic model in Power BI Desktop and run DAX queries there.
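
If you prefer a programmatic route, the Power BI REST API's Execute Queries endpoint can also run DAX queries against a published semantic model. A minimal sketch in Python; the token, dataset ID, and the table and measure names in the query are placeholders, and the caller needs query permissions on the dataset:

```python
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"   # placeholder: token with permission to query the dataset
DATASET_ID = "<dataset-id>"                # placeholder: the semantic model's dataset ID

url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries"
body = {
    # Hypothetical table and measure names, purely for illustration.
    "queries": [{"query": "EVALUATE SUMMARIZECOLUMNS('Date'[Year], \"Total\", [Total])"}],
    "serializerSettings": {"includeNulls": True},
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
# Print the first few result rows returned for the query.
print(resp.json()["results"][0]["tables"][0]["rows"][:5])
```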

Try out DAX query view in web today and learn more about how DAX queries can help you in Power BI and Fabric.

  • Deep dive into DAX query view in web
  • DAX queries
  • Work with DAX query view
  • Deep dive into DAX query view and writing DAX queries
  • Write DAX queries with Copilot
  • Deep dive into DAX query view with Copilot
  • Overview of Copilot for Power BI
  • Direct Lake

Check out a Modeling demo here:

Embedded Analytics

Narrative visual with Copilot available in SaaS embed

We are excited to announce that the Narrative visual with Copilot is available for user owns data scenarios (SaaS) and secure embed. This means that when a user embeds a report containing the narrative visual in a solution where users must sign in, they will now be able to see the visual refresh with their data. This is the first step on our Copilot embed journey!

When you embed a Power BI report in an application in the “embed for your organization” scenario, it allows organizations to integrate rich, interactive data visualizations seamlessly into their internal tools and workflows. Now this solution supports the Copilot visual. A sales team might want to embed a Power BI report in their internal CRM application to streamline their workflow. By integrating sales performance dashboards directly into the CRM, team members can easily monitor key metrics like monthly sales targets, pipeline status, and individual performance, without switching between different tools. This integration enables quicker access to actionable insights, helping the team make informed decisions, identify trends, and react swiftly to market changes, all within the secure environment of their organization’s data infrastructure.

Supported Scenarios:

  • Secure embed : Embed a report in a secure portal or website in Power BI.
  • User owns data : A user embeds a report containing the narrative visual in a solution where users must sign in; they need a license to do so. This scenario is also known as embed for your organization, and it includes embedding visuals in solutions like PowerPoint.

Unsupported Scenario:

  • App owns data:  A customer embeds a narrative visual on a website where users visit, and don’t need to sign in. Also known as  embed for your customer’s application .

To get this set up, there are a few steps to follow, so make sure to check out the documentation: Embed a Power BI report with a Copilot narrative visual – Power BI | Microsoft Learn.

You will need to Edit your Microsoft Entra app permissions to enable the embedded scenario to work.

Screenshot showing Select add permission.

From here you’ll need to add the MLModel.Execute.All permission.


Check out the documentation for additional details.

Check out an Embedded Analytics demo here:

Visualizations

Icon Map Pro hi-chart Reporting Studio Water Cup Performance Flow – xViz Sunburst by Powerviz Zebra BI Tables 7.0 Enlighten Storyteller Inforiver Writeback Matrix Drill Down Pie PRO (Filter) by ZoomCharts (microsoft.com) Spiral Plot By Office Solution Polar Scatter Plot By Office Solution Hanging Rootogram Chart for Power BI Bar Chart Run Time Convertible Scatter Plot Circular Dendrogram Chart for Power BI Barley Trellis Plot By Office Solution Connected Scatter Plot Chart For Power BI Dot Plot Chart by Office Solution Voronoi Diagram By Office Solution Fish Bone Chart for Power BI Icon Array Chart for Power BI

Image Skyline StackedTrends Visual Bubble Diagram Chord Diagram Non-Ribbon Chord Diagram

Powerviz Filter is an advanced Power BI slicer (Free Visual) that applies a page-level filter to the data. It stands out for its user-friendly design and customization flexibility, with a developer-friendly wizard.

Key Features:

  • Hierarchy Control : Support multiple hierarchies with expand/collapse and by-level formatting
  • Ragged Hierarchy Support: Hide BLANK category/values, or both, and display child as parent.
  • Keep selected items at Top : enable this to show your selected items at top.
  • Display Mode : Seamlessly switch between pop-up/canvas modes.
  • Default Selection: Select default categories/values that automatically get filtered on refresh.
  • Selection Mode: Single-select, multiple-select, or select-all with only single-selection.
  • Image: Add images alongside the filter. HTML Links/Base-64 URLs Support.
  • Title-Bar Options: Search Bar, Clear Icon, Ranking, Filter, Sorting, Expand/Collapse.
  • Conditional Formatting: Highlight font and row background color based on specific rules.
  • Template: Choose from professionally created light/dark templates, and easily customize them using the Global styling option.

Other features included are Import/Export Themes, Interactivity, Filter Style, and more.

Business Use-Cases:

Sales Analysis, Marketing Performance tracking, Financial Monitoring

🔗 Try Filter Visual for FREE from AppSource

📊 Check out all features of the visual: Demo_file

📃 Step-by-step instructions: Documentation

💡 YouTube Video: Video_Link

📍 Learn more about visuals: https://powerviz.ai/

✅ Follow Powerviz : https://lnkd.in/gN_9Sa6U


Slice to Spice: Transform your Pie Chart by Clicking! Dive deeper with a click, creating a new pie!

Pie of Pie by JTA – a Data Scientist’s Visualization Tool

Slice, Click, Reveal: Explore Deeper Insights with Our Interactive Pie Chart Visual for Power BI!

A Power BI custom visual that enables the creation of a hierarchical representation within a Pie Chart. With a simple click, you can effortlessly delve into detailed categories, offering a seamless and visually intuitive way to unveil multi-level insights in a single view.

Experience the convenience of interactive data analysis, where each slice of the initial pie chart acts as a gateway to deeper layers of information. Whether you're dissecting population demographics, analysing sales performance, or exploring product distribution, Pie of Pie offers a seamless and visually intuitive solution.

  • Interactive hierarchical representation within a Pie Chart: Dive into detailed categories with ease, exploring multi-level insights seamlessly.
  • Effortlessly explore multi-level insights with a single click: Click on a slice to reveal deeper layers of information, enhancing your data analysis experience.
  • Customizable colours, labels, and legend: Tailor the visual to match your branding or personal preferences, ensuring clarity and consistency in your reports.
  • Choose where to display always both pies and just show the second upon click: Optimize your visual presentation by selecting the most suitable display mode for your data storytelling needs.
  • Animate the visual: Bring your data to life with smooth animations, captivating your audience and enhancing engagement with your insights.
  • Personalize the spacing: Fine-tune the spacing between elements to achieve the perfect balance of aesthetics and readability in your visualizations.

Download Pie of Pie by JTA for free: AppSource

Try Pie of Pie by JTA: Demo

Youtube video: Youtube

Learn more about us: JTA The Data Scientists


Everyone knows what a pie chart is – for centuries, it has been the most popular way to visualize data. But what makes Drill Down Pie PRO special is the incredible amount of flexibility it offers to creators. Enjoy a wide range of customization features (colors, fonts, legends, labels, and more), create up to nine levels of drill down hierarchy, and declutter the chart with an interactive ‘Others’ slice that users can expand with just a click.

What’s more, this visual can be more than just a pie chart – it can be an interactive navigation tool for the entire report. When the user selects a slice or drills down, it will cross-filter other visuals on the report, instantly revealing focused insights. Create faster, more intuitive, and more insightful reports with ZoomCharts!

Main Features:

  • On-chart drill down
  • Cross-chart filtering
  • Up to 9 levels of hierarchy
  • Adjustable ‘Others’ slice
  • Color, label, and legend customization
  • Custom tooltip fields
  • Touch support

🌐 Get Drill Down Pie PRO on AppSource

Product Page | Documentation | Follow ZoomCharts on LinkedIn


Hierarchical bar chart displays hierarchical data (different fields having a parent/child relationship) in the form of a bar/column chart, with +/- signs to show or hide details or child elements.


A new feature was added to the visual in June 2024 whereby users can display the CAGR between two values by clicking the bars one after another (after turning on "CAGR" in the format pane).


This visual has the following key features.

1) Expand/ Collapse bars using (+/-) buttons

2) Show variance between bars

3) Show CAGR between bars

4) Drag the bars for custom sorting

5) Click on legends to drill down/up to any level

6) Show targets

Watch a demo of these features in the short video below:

https://www.youtube.com/watch?v=kOcs5RNY-Zs

Download this visual from APPSOURCE

Download demo file from APPSOURCE

For more information visit https://www.excelnaccess.com/hierarchical-barchart/

or contact [email protected]

Deneb is a free and open-source certified custom visual that allows developers to create their own highly bespoke data visualizations directly inside Power BI using the declarative JSON syntax of the Vega or Vega-Lite languages.

This is similar to the approaches used for creating R and Python visuals in Power BI, with the following additional benefits:

  • Everything in-visual —no additional dependencies on local libraries or gateways for your end-users when publishing reports.
  • Microsoft certified runtime —any visual you create receives the same benefits of a certified custom visual, meaning your design will work anywhere Power BI works, including Publish to Web, mobile, PowerPoint, and PDF exports.
  • Performance —your designs are rendered directly inside Power BI rather than being delegated to another location, keeping data inside your workbook and typically resulting in faster render times for end-users.
  • Interactivity —You can integrate Power BI’s interactivity features (tooltips, Drillthrough, cross-filtering, and cross-highlighting) with some additional setup.


📢Our latest version brings many of our top requested new features to the development experience, including:

  • Dark mode —toggle between the traditional light theme and dark theme to reduce eye strain.


  • Commenting —you can now add comments to your JSON for documentation and debugging purposes.


  • Auto-completion improvements —suggestions will now be recommended based on the details in the Vega and Vega-Lite schemas.
  • Inline language documentation (for Vega-Lite)—the documentation the Vega team makes available for Vega-Lite in its language schema is now available when you hover your mouse over an appropriate location in your JSON. This will help you discover more language features within Deneb itself, and any hyperlinks will navigate you to the correct location on the Vega-Lite documentation site for further reading.
  • Auto unit formatting —a new format type that applies the same logic Power BI uses to format numbers as K, M, Bn, etc., with less effort than the existing Power BI value formatter.
  • Advanced cross-filtering (for Vega)—new expression functions to help generate cross-filtering of report items based on a filter against the original dataset sent to Deneb before any transformations may have been applied.

We have many other enhancements in this release, and you can find out more about how these can help you and your readers by:

  • Visiting the Change Log on Deneb’s website
  • Checking out our YouTube spotlight videos on key new features
  • Downloading Deneb from AppSource
  • Getting inspired by examples from our community or the sample workbook
  • Following Deneb

Paginated Reports: Sharing of reports connecting to Get Data data sources made easy

We announced the ability to create paginated reports from Power BI Report Builder by connecting to over 100 data sources with the Get Data experience. You can learn more about Connect paginated reports to data sources using Power Query (Preview) – Power BI | Microsoft Learn. You no longer need to share the shareable cloud connection. You only need to share the report and ensure that those consuming the report have access to view the report. This update will be rolling out in the coming weeks.

That is all for this month!

We hope that you enjoy the update! If you installed Power BI Desktop from the Microsoft Store,  please leave us a review .

As always, keep voting on  Ideas  to help us determine what to build next. We are looking forward to hearing from you!

  • embedded analytics
  • Microsoft Fabric
  • paginated reports
  • semantic model

Gemini Nano language detection API available for early preview

Kenji Baheux

A language detection API is now available for local experimentation to our early preview program (EPP) participants. With this API, you can determine what language is being used on a web page.

The language detection APIs explainer is available as a proposal for the future development of this exploratory API and other APIs, including a translation API.

Language detection is the first step for translation. Browsers often already have language detection capabilities, and this API will allow web developers to access this technology with a JavaScript API.

As with our other APIs, we'll take your feedback to update the way language detection works, to ensure it meets the needs of developers and users. We hope to learn about the quality of language detection, gather feedback on the API design, and understand the impact of the current implementation in Chrome Canary.

Once you've signed up and been accepted to the EPP, you'll have access to a demo so you can experiment with this API.

Join the early preview program

As of now, the Prompt API, summarization API, and the language detection API are available for prototyping .

Sign up for the early preview program to gain access to the documentation and demos, stay up-to-date with the latest changes, and discover new APIs.



README_CN.md


中文  |   English   |   日本語 |   Français |   Español


🤗 Hugging Face    |   🤖 ModelScope    |    📑 Paper    |   🖥️ Demo WeChat (微信)    |    Discord    |    API    |    Web    |    APP

Qwen2 has been released; check it out here: QwenLM/Qwen2

The Qwen2 model code and usage differ considerably from earlier versions, so we maintain Qwen2 in a new repo. This repo (QwenLM/Qwen) no longer receives major updates or maintenance.

Please do not mix Qwen and Qwen2 code; the two are not compatible.

Download links for Qwen-Chat, Qwen-Chat (Int4), Qwen-Chat (Int8), and the base Qwen models are provided in the original README for the 1.8B, 7B, 14B, and 72B sizes.

We have open-sourced the Qwen (Tongyi Qianwen) series, currently covering models with 1.8B, 7B, 14B, and 72B parameters. This release includes the base models Qwen-1.8B, Qwen-7B, Qwen-14B, and Qwen-72B, and the chat models Qwen-1.8B-Chat, Qwen-7B-Chat, Qwen-14B-Chat, and Qwen-72B-Chat. The model links are provided in the table; click through for details. We have also published our technical report, available via the paper link at the top of this README.

The base models have been stably trained on large-scale, high-quality, and diverse data covering multiple languages (currently mainly Chinese and English), totaling up to 3 trillion tokens. On the relevant benchmarks, the Qwen series delivers very competitive results, clearly outperforming models of similar size and closely trailing the strongest proprietary models. In addition, we align the base models with SFT and RLHF to obtain the chat models. Qwen-Chat can chat, write text, summarize, extract information, and translate, and it also has some ability to generate code and do simple mathematical reasoning. On top of that, we have specifically optimized for connecting the LLM to external systems, so the models currently have strong tool-calling abilities, as well as the recently much-discussed Code Interpreter and agent capabilities. The characteristics of each model size are listed in the table below.

| Model | Release date | Max context length | Pretraining tokens | Min GPU memory for fine-tuning (Q-LoRA) | Min GPU memory to generate 2048 tokens (Int4) |
|---|---|---|---|---|---|
| Qwen-1.8B | 23.11.30 | 32K | 2.2T | 5.8GB | 2.9GB |
| Qwen-7B | 23.08.03 | 32K | 2.4T | 11.5GB | 8.2GB |
| Qwen-14B | 23.09.25 | 8K | 3.0T | 18.7GB | 13.0GB |
| Qwen-72B | 23.11.30 | 32K | 3.0T | 61.4GB | 48.9GB |

(The original table also indicates, per model, whether system prompt enhancement and tool calling are supported; those markers were lost in this copy.)

In this project, you can find the following:

  • A quickstart tutorial for running Qwen-Chat inference
  • Details on the quantized models, including GPTQ and KV cache quantization
  • Inference performance figures, including inference speed and GPU memory usage
  • Fine-tuning tutorials covering full-parameter fine-tuning, LoRA, and Q-LoRA
  • Deployment tutorials, using vLLM and FastChat as examples
  • How to build demos, including a Web UI demo and a CLI demo
  • How to set up an API, with an OpenAI-style API as the provided example
  • More on Qwen's tool-calling, Code Interpreter, and agent capabilities

If you run into problems, please check the FAQ first. If that does not resolve them, feel free to open an issue (preferably in English or with a translation, so it can help more users). If you would like to help us improve, Pull Requests are welcome!

If you want to discuss and chat with us, join our WeChat group and Discord server (links at the beginning of this document)!

  • 2023.11.30 🔥 We released Qwen-72B and Qwen-72B-Chat, trained on 3T tokens and supporting 32k context, together with Qwen-1.8B and Qwen-1.8B-Chat. We also strengthened the system prompt capabilities of Qwen-72B-Chat and Qwen-1.8B-Chat; see the example documentation. In addition, inference is now supported on Ascend 910 and Hygon DCU; see the ascend-support and dcu-support folders for details.
  • 2023.10.17 We released the Int8 quantized models Qwen-7B-Chat-Int8 and Qwen-14B-Chat-Int8.
  • Compared with the original Qwen-7B, the new version uses more training data (up from 2.2T to 2.4T tokens) and extends the sequence length from 2048 to 8192. Overall Chinese capability and coding ability have both improved.
  • 2023.9.12 Fine-tuning of Qwen-7B and Qwen-7B-Chat is supported, including full-parameter fine-tuning, LoRA, and Q-LoRA.
  • 2023.8.21 We released the Int4 quantized model Qwen-7B-Chat-Int4, which has low GPU memory usage and much faster inference than the half-precision model, with only a small loss in benchmark performance.
  • 2023.8.3 Qwen-7B and Qwen-7B-Chat were released on ModelScope and Hugging Face simultaneously, together with a technical memo covering the training details and model performance.

The Qwen series significantly outperforms models of similar size. The benchmarks we evaluate include MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, and BBH, covering natural language understanding, knowledge, mathematical computation and reasoning, code generation, and logical reasoning. Qwen-72B outperforms LLaMA2-70B on all tasks and surpasses GPT-3.5 on 7 out of 10 tasks.


| Model | MMLU (5-shot) | C-Eval (5-shot) | GSM8K (8-shot) | MATH (4-shot) | HumanEval (0-shot) | MBPP (3-shot) | BBH (3-shot) | CMMLU (5-shot) |
|---|---|---|---|---|---|---|---|---|
| LLaMA2-7B | 46.8 | 32.5 | 16.7 | 3.3 | 12.8 | 20.8 | 38.2 | 31.8 |
| LLaMA2-13B | 55.0 | 41.4 | 29.6 | 5.0 | 18.9 | 30.3 | 45.6 | 38.4 |
| LLaMA2-34B | 62.6 | - | 42.2 | 6.2 | 22.6 | 33.0 | 44.1 | - |
| ChatGLM2-6B | 47.9 | 51.7 | 32.4 | 6.5 | - | - | 33.7 | - |
| InternLM-7B | 51.0 | 53.4 | 31.2 | 6.3 | 10.4 | 14.0 | 37.0 | 51.8 |
| InternLM-20B | 62.1 | 58.8 | 52.6 | 7.9 | 25.6 | 35.6 | 52.5 | 59.0 |
| Baichuan2-7B | 54.7 | 56.3 | 24.6 | 5.6 | 18.3 | 24.2 | 41.6 | 57.1 |
| Baichuan2-13B | 59.5 | 59.0 | 52.8 | 10.1 | 17.1 | 30.2 | 49.0 | 62.0 |
| Yi-34B | 76.3 | 81.8 | 67.9 | 15.9 | 26.2 | 38.2 | 66.4 | 82.6 |
| XVERSE-65B | 70.8 | 68.6 | 60.3 | - | 26.3 | - | - | - |
|  | 45.3 | 56.1 | 32.3 | 2.3 | 15.2 | 14.2 | 22.3 | 52.1 |
|  | 58.2 | 63.5 | 51.7 | 11.6 | 29.9 | 31.6 | 45.0 | 62.2 |
|  | 66.3 | 72.1 | 61.3 | 24.8 | 32.3 | 40.8 | 53.4 | 71.0 |

(The model names in the first column of the last three rows were lost in this copy.)
For all the baseline models above, we report the best score between their officially reported results and the OpenCompass results.

For more experimental results and details, please see our technical memo (linked here).

  • Python 3.8 or above
  • PyTorch 1.12 or above (2.0 or above is recommended)
  • transformers 4.32 or above
  • CUDA 11.4 or above is recommended (relevant for GPU and flash-attention users)

We provide simple examples showing how to quickly use Qwen-7B and Qwen-7B-Chat with 🤖 ModelScope and 🤗 Transformers.

You can use our pre-built Docker images to skip most of the environment setup; see the section "Using pre-built Docker images" for details.

If you are not using Docker, make sure your environment is configured and the required packages are installed. Above all, make sure you meet the requirements above, then install the dependencies.

If your GPU supports fp16 or bf16, we also recommend installing flash-attention (flash attention 2 is now supported) to improve efficiency and reduce GPU memory usage. (flash-attention is optional; the project runs fine without it.)

You can then start using our models with Transformers or ModelScope.

🤗 Transformers

To run inference with Qwen-Chat, all you need to write is a few lines of code, as shown below. Please make sure you are using the latest code and specify the correct model name or path, such as Qwen/Qwen-7B-Chat or Qwen/Qwen-14B-Chat.
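
A minimal sketch of that usage (the model name and prompts here are only examples):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Substitute the chat model you actually want, e.g. "Qwen/Qwen-14B-Chat".
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True
).eval()

# First turn: no history yet.
response, history = model.chat(tokenizer, "Hello! Who are you?", history=None)
print(response)

# Follow-up turn reuses the returned history.
response, history = model.chat(tokenizer, "Tell me a short story about perseverance.", history=history)
print(response)
```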

Running the base Qwen model is just as simple.
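
For the base model, a short sketch using the standard generate() call (the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B", device_map="auto", trust_remote_code=True
).eval()

# Base models do plain text continuation rather than chat.
inputs = tokenizer("The capital of Mongolia is Ulaanbaatar. The capital of Iceland is", return_tensors="pt")
inputs = inputs.to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```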

If for any reason you cannot pull the model and code from HuggingFace when using the code above, you can first download the model and code from ModelScope to a local directory and then load the model from there:
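
A sketch of that flow, assuming the modelscope package is installed:

```python
from modelscope import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the model snapshot from ModelScope, then load it locally with Transformers.
model_dir = snapshot_download("qwen/Qwen-7B-Chat")
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True).eval()
```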

🤖 ModelScope

ModelScope is an open-source Model-as-a-Service (MaaS) platform that offers AI developers flexible, easy-to-use, and low-cost one-stop model services. Using ModelScope is just as simple; example code is shown below:
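
A short sketch of the equivalent ModelScope-based usage:

```python
from modelscope import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen-7B-Chat", revision="master", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "qwen/Qwen-7B-Chat", revision="master", device_map="auto", trust_remote_code=True
).eval()

response, history = model.chat(tokenizer, "Hi, can you introduce yourself?", history=None)
print(response)
```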

Qwen supports batch inference. With flash-attention enabled, batch inference gives roughly a 40% speedup. Example code is shown below:

For CPU deployment and inference, we recommend qwen.cpp, a C++ implementation of Qwen and tiktoken. Follow the link to the repo for details.

Of course, you can also run the model directly on CPU; for example:
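
For instance, a CPU-only sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
# device_map="cpu" keeps all weights on the CPU; expect very slow generation.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat", device_map="cpu", trust_remote_code=True
).eval()

response, _ = model.chat(tokenizer, "Hello", history=None)
print(response)
```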

However, inference this way is likely to be very slow.

If you run out of GPU memory and want to use multiple GPUs for inference, you can load the model with the default method described above. The previously provided utils.py script is no longer maintained.

Although this approach is simple, its efficiency is relatively low. We recommend using vLLM and FastChat instead; please read the deployment section.

When deploying quantized models on Intel Core™/Xeon® Scalable processors or Arc™ GPUs, we recommend using the OpenVINO™ Toolkit to make full use of the hardware and achieve better inference performance. You can install and run this example notebook. For related issues, please file them in the OpenVINO repo.

Alibaba Cloud DashScope API service

The simplest way to use Qwen through an API service is DashScope, the Alibaba Cloud model API service. Below is a brief introduction to its usage. We also describe how to deploy your own OpenAI-style API.

DashScope is Alibaba Cloud's API service for large language models and currently supports Qwen. Note, however, that the Qwen models behind the service are internal models, and no further details are disclosed. The service includes qwen-turbo, qwen-plus, and qwen-max: qwen-turbo is faster, qwen-plus performs better, and qwen-max is the recently released 100-billion-scale Tongyi Qianwen 2.0 model. See the documentation for details.

First, go to the official website to activate DashScope and obtain an API key (AK). We recommend setting the key via an environment variable:

Then install the client package; click here for the installation documentation. If you use Python, simply install it with pip:

To install the Java SDK instead, use the following command:

The simplest usage is calling it with messages, much like the OpenAI API. For example:
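
A hedged sketch using the DashScope Python SDK's messages-style call (assumes the dashscope package is installed and DASHSCOPE_API_KEY is set, as described above):

```python
import os
from http import HTTPStatus

import dashscope

# The API key was set as an environment variable earlier.
dashscope.api_key = os.getenv("DASHSCOPE_API_KEY")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself briefly."},
]
response = dashscope.Generation.call(
    model="qwen-turbo",
    messages=messages,
    result_format="message",  # return an OpenAI-style message instead of plain text
)
if response.status_code == HTTPStatus.OK:
    print(response.output.choices[0].message.content)
else:
    print(response.code, response.message)
```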

See the official documentation for more usage details.

We provide a quantization solution based on AutoGPTQ and have released Int4 and Int8 quantized models. The quantized models show only a small loss in quality while significantly reducing GPU memory usage and improving inference speed.

Below we show how to use the Int4 quantized models. Before you start, make sure you meet the requirements (e.g., torch 2.0 or above, transformers 4.32.0 or above, and so on) and install the required packages:

If you have trouble installing auto-gptq, we suggest searching the official repo for a suitable pre-built wheel.

Note: The pre-built auto-gptq packages have strict requirements on the torch version and its CUDA version. Also, due to recent updates, you may run into version errors raised by transformers, optimum, or peft. We recommend using the latest versions that satisfy one of the following sets of requirements: either torch==2.1, auto-gptq>=0.5.1, transformers>=4.35.0, optimum>=1.14.0, peft>=0.6.1; or torch>=2.0,<2.1, auto-gptq<0.5.0, transformers<4.35.0, optimum<1.14.0, peft>=0.5.0,<0.6.0.

You can then call the quantized models in exactly the same way as above:
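
For example, loading the Int4 chat model works the same way as the examples above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat-Int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat-Int4", device_map="auto", trust_remote_code=True
).eval()

response, history = model.chat(tokenizer, "Hello", history=None)
print(response)
```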

We benchmarked the BF16, Int8, and Int4 models and found that the quantized models show only a small loss in quality. The results are as follows:

| Quantization | MMLU | C-Eval (val) | GSM8K | HumanEval |
|---|---|---|---|---|
| Qwen-1.8B-Chat (BF16) | 43.3 | 55.6 | 33.7 | 26.2 |
| Qwen-1.8B-Chat (Int8) | 43.1 | 55.8 | 33.0 | 27.4 |
| Qwen-1.8B-Chat (Int4) | 42.9 | 52.8 | 31.2 | 25.0 |
| Qwen-7B-Chat (BF16) | 55.8 | 59.7 | 50.3 | 37.2 |
| Qwen-7B-Chat (Int8) | 55.4 | 59.4 | 48.3 | 34.8 |
| Qwen-7B-Chat (Int4) | 55.1 | 59.2 | 49.7 | 29.9 |
| Qwen-14B-Chat (BF16) | 64.6 | 69.8 | 60.1 | 43.9 |
| Qwen-14B-Chat (Int8) | 63.6 | 68.6 | 60.0 | 48.2 |
| Qwen-14B-Chat (Int4) | 63.3 | 69.0 | 59.8 | 45.7 |
| Qwen-72B-Chat (BF16) | 74.4 | 80.1 | 76.4 | 64.6 |
| Qwen-72B-Chat (Int8) | 73.5 | 80.1 | 73.5 | 62.2 |
| Qwen-72B-Chat (Int4) | 73.4 | 80.1 | 75.3 | 61.6 |
Note: Due to Hugging Face's internal implementation, the support files for this feature, cache_autogptq_cuda_256.cpp and cache_autogptq_cuda_kernel_256.cu, may not have been downloaded. If you need to enable it, please download them manually from the relevant location and place them in the corresponding directory.

During model inference, the intermediate keys and values can be quantized and stored in compressed form, so that more keys and values fit on the same GPU, increasing sample throughput.

We provide two parameters in config.json, use_cache_quantization and use_cache_kernel, to control whether KV cache quantization is enabled. The specific usage is as follows:
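
A sketch of enabling it when loading the model (flash attention is turned off explicitly, per the note below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    device_map="auto",
    trust_remote_code=True,
    use_flash_attn=False,          # KV cache quantization cannot be combined with flash attention
    use_cache_quantization=True,   # quantize cached keys/values to int8
    use_cache_kernel=True,         # use the custom kernel for the quantized cache
).eval()
```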

Note: This feature currently cannot be enabled together with flash attention. If you enable both KV cache quantization and flash attention (use_flash_attn=True, use_cache_quantization=True, use_cache_kernel=True), use_flash_attn is disabled by default.

In terms of quality, we verified that using the Int8 KV cache causes essentially no loss in the model's overall accuracy metrics. We also ran performance tests on GPU memory usage. The evaluation runs on a single A100-SXM4-80G GPU, with the model in BF16 by default, generating 1024 tokens; OOM indicates out of memory.

With KV cache quantization enabled, the model can use a larger batch size (bs) during inference.

| KV cache quantization | bs=1 | bs=4 | bs=16 | bs=32 | bs=64 | bs=100 |
|---|---|---|---|---|---|---|
| No | 16.3GB | 24.1GB | 31.7GB | 48.7GB | OOM | OOM |
| Yes | 15.5GB | 17.2GB | 22.3GB | 30.2GB | 48.2GB | 72.4GB |

With KV cache quantization enabled, the model also saves more GPU memory when generating longer sequences (sl, the number of generated tokens).

| KV cache quantization | sl=512 | sl=1024 | sl=2048 | sl=4096 | sl=8192 |
|---|---|---|---|---|---|
| No | 15.2GB | 16.3GB | 17.6GB | 19.5GB | 23.2GB |
| Yes | 15GB | 15.5GB | 15.8GB | 16.6GB | 17.6GB |

When KV cache quantization is enabled, the model converts the float-format keys/values that are normally stored in layer_past into int8 format during inference, while also storing the quantization parameters.

  • Quantize the keys/values
  • Store them in layer_past:

The quantized-format layer_past:

The original-format layer_past:

If you need to take the keys/values stored in layer_past out and use them directly, you can dequantize the Int8 keys/values back to float format:

This section presents data on inference speed and GPU memory usage. The performance measurements below were produced with this script.

We measured the average inference speed (tokens/s) and GPU memory usage of the BF16, Int8, and Int4 models when generating 2048 tokens. The results are shown below:

| Model size | Quantization | Speed (tokens/s) | GPU memory usage |
|---|---|---|---|
| 1.8B | BF16 | 54.09 | 4.23GB |
| 1.8B | Int8 | 55.56 | 3.48GB |
| 1.8B | Int4 | 71.07 | 2.91GB |
| 7B | BF16 | 40.93 | 16.99GB |
| 7B | Int8 | 37.47 | 11.20GB |
| 7B | Int4 | 50.09 | 8.21GB |
| 14B | BF16 | 32.22 | 30.15GB |
| 14B | Int8 | 29.28 | 18.81GB |
| 14B | Int4 | 38.72 | 13.01GB |
| 72B | BF16 | 8.48 | 144.69GB (2xA100) |
| 72B | Int8 | 9.05 | 81.27GB (2xA100) |
| 72B | Int4 | 11.32 | 48.86GB |
| 72B + vLLM | BF16 | 17.60 | 2xA100 |

The evaluation runs on a single A100-SXM4-80G GPU (except where 2xA100 is noted) with PyTorch 2.0.1, CUDA 11.8, and Flash-Attention 2. (72B + vLLM uses PyTorch 2.1.0 and CUDA 11.8.) Inference speed is averaged over the generation of 2048 tokens.

Note: The Int4/Int8 generation speeds above are measured with the autogptq library; models loaded with AutoModelForCausalLM.from_pretrained currently generate roughly 20% slower. We have reported this issue to the HuggingFace team and will update promptly if a solution becomes available.

We also measured inference speed and GPU memory usage with different context lengths, generation lengths, and Flash-Attention versions. The results can be found on the corresponding model pages on Hugging Face or ModelScope.

We provide the finetune.py script so users can fine-tune on their own data for downstream tasks, plus shell scripts to reduce the workload. The script supports DeepSpeed and FSDP. Our shell scripts use DeepSpeed, so we suggest making sure DeepSpeed and Peft are installed (note: DeepSpeed may be incompatible with the latest pydantic version; make sure pydantic<2.0). You can install them with the following command:

First, prepare your training data. Put all samples in a list and save it to a JSON file. Each sample is a dictionary containing an id and a conversation, the latter being a list. An example is shown below:
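
A small illustrative sketch that writes two hypothetical samples in this format:

```python
import json

# Hypothetical training samples: a list of dicts, each with an "id" and a "conversations" list.
data = [
    {
        "id": "identity_0",
        "conversations": [
            {"from": "user", "value": "Hello"},
            {"from": "assistant", "value": "Hi! How can I help you today?"},
        ],
    },
    {
        "id": "identity_1",
        "conversations": [
            {"from": "user", "value": "What can you do?"},
            {"from": "assistant", "value": "I can chat, write text, summarize, translate, and more."},
        ],
    },
]

with open("train_data.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
```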

Once the data is ready, you can use the shell scripts we provide to run fine-tuning. Note that you need to specify the path to your data in the script.

The fine-tuning scripts support full-parameter fine-tuning, LoRA, and Q-LoRA.

Full-parameter fine-tuning updates all parameters during training. You can start training by running this script:

In particular, you need to specify the correct model name or path, the data path, and the output directory in the script. This script uses DeepSpeed ZeRO 3. If you want to change that configuration, you can remove the --deepspeed argument or modify the DeepSpeed config JSON file to suit your needs. We also support mixed-precision training, so you can set --bf16 True or --fp16 True. When using fp16, please use DeepSpeed for mixed-precision training. Empirically, if your machine supports bf16, we recommend bf16 so that training stays consistent with our pretraining and alignment; that is also why it is the default.

Running LoRA is similar to full-parameter fine-tuning, but before you start, make sure the peft library is installed. Also remember to set the correct model, data, and output paths. We recommend using an absolute path for the model: LoRA only stores the adapter parameters, and the adapter config JSON records the path of the pretrained model, which is used to load the pretrained weights. As before, you can set bf16 or fp16.

Unlike full-parameter fine-tuning, LoRA (paper) only updates the parameters of the adapter layers and leaves the original language model parameters untouched. This allows training with much lower GPU memory cost and also means lower compute cost.

Note that if you run LoRA fine-tuning on a base (pretrained) model rather than a chat model, the parameters of the embedding and output layers are made trainable. This is because the base model has never seen the special tokens of the ChatML format, so these parameters must be trainable for the model to learn to understand and predict them. It also means that if your training introduces new special tokens, you need to make the corresponding parameters trainable via modules_to_save in the code. Introducing these trainable parameters affects the use of ZeRO 3, so we recommend ZeRO 2 by default in this case; if you do not need them, you can switch to ZeRO 3 by swapping the DeepSpeed config file. If you want to save GPU memory, consider running LoRA fine-tuning on a chat model instead, which lowers memory usage substantially. The memory usage and training speed figures below cover these details.

If you still run out of GPU memory, consider Q-LoRA (paper), which uses a 4-bit quantized model together with techniques such as paged attention to reduce memory usage even further.

Note: If you run Q-LoRA on a single GPU, you may need to install mpi4py, which can be installed via pip or conda.

To run Q-LoRA, simply run the following script:

We recommend using our Int4 quantized model for this training, i.e., Qwen-7B-Chat-Int4. Please do not use a non-quantized model! Unlike full-parameter fine-tuning and LoRA, Q-LoRA only supports fp16. Also, because we found issues with the fp16 mixed-precision training supported by torch amp, single-GPU Q-LoRA training currently must use DeepSpeed. The special-token caveats described above for LoRA also apply to Q-LoRA, and the parameters of the Int4 model cannot be made trainable. Fortunately, we only provide Int4 models for the chat models, so you do not need to worry about this; if you nonetheless insist on introducing new special tokens with Q-LoRA, we unfortunately cannot guarantee successful training.

Note: Due to Hugging Face's internal implementation, some non-Python files (e.g., *.cpp and *.cu) are not saved when the model is saved. If you need the related features, please copy those files manually.

Unlike full-parameter fine-tuning, LoRA and Q-LoRA training only store the adapter parameters. To use the LoRA-trained model, proceed as follows. Assuming you trained from Qwen-7B, you can load the model with the code below:
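
A sketch of loading the adapter with peft ("path_to_adapter" is a placeholder for your training output directory):

```python
from peft import AutoPeftModelForCausalLM

# The adapter directory also records the path of the base pretrained model it was trained from.
model = AutoPeftModelForCausalLM.from_pretrained(
    "path_to_adapter", device_map="auto", trust_remote_code=True
).eval()
```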

Note: With peft>=0.8.0, loading the model also attempts to load the tokenizer, but peft does not set trust_remote_code=True internally, resulting in ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported. To work around this, you can downgrade to peft<0.8.0 or move the tokenizer files to another folder.

If this one-step approach makes you uneasy or gets in the way of your downstream application, you can instead merge and save the model first (LoRA supports merging; Q-LoRA does not) and then load your new model the usual way, for example:
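
A sketch of merging and saving (paths are placeholders):

```python
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "path_to_adapter", device_map="auto", trust_remote_code=True
)

# Merge the LoRA adapter into the base weights and save the standalone model.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("new_model_directory", max_shard_size="2048MB", safe_serialization=True)
```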

The new_model_directory directory will contain the merged model parameters and the related model code. Note that *.cu and *.cpp files may not be saved, so please copy them manually. Also, merge_and_unload only saves the model, not the tokenizer; if you need it, copy the relevant files or save it with the code below.
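
For example, saving the tokenizer alongside the merged weights:

```python
from transformers import AutoTokenizer

# Save the tokenizer into the same directory as the merged model.
tokenizer = AutoTokenizer.from_pretrained("path_to_adapter", trust_remote_code=True)
tokenizer.save_pretrained("new_model_directory")
```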

Note: For distributed training, you need to set the distributed training hyperparameters correctly for your needs and your machines. In addition, set your data length with --model_max_length according to your data, your available GPU memory, and your expected training speed.

This subsection covers quantizing a model after full-parameter or LoRA fine-tuning. (Note: you do not need to quantize a Q-LoRA model, since it is already quantized.) If you need to quantize a LoRA-fine-tuned model, first merge the weights as described above.

We recommend using auto_gptq to quantize your model.

Note: AutoGPTQ currently has a bug; see the linked issue. There is also a fix PR, and you can install from source using that branch.

First, prepare a calibration set. You can reuse your fine-tuning data, or prepare other data in the same way.

Second, run the following command:

This step requires a GPU and, depending on the size of your calibration set and your model, may take several hours.

Next, copy all *.py, *.cu, and *.cpp files and generation_config.json from the original model into the output model directory. Also, overwrite the config.json in the output directory with the config.json of the official quantized model of the corresponding version (for example, if you fine-tuned Qwen-7B-Chat with --bits 4, you can take the config.json from the Qwen-7B-Chat-Int4 repository). Finally, rename gptq.safetensors to model.safetensors.

Finally, test your model just like the official quantized models. For example:

The scripts we provide support multi-node fine-tuning. Refer to the comments in the script, set the relevant parameters correctly on each node, and launch the training script on every node. For more on multi-node distributed training, see torchrun.

Note: DeepSpeed ZeRO 3 has much higher requirements on inter-node communication bandwidth than ZeRO 2 and will significantly slow down multi-node fine-tuning. We therefore do not recommend the DeepSpeed ZeRO 3 configuration for multi-node fine-tuning.

Below we record GPU memory usage and training speed for the 7B and 14B models on a single GPU when using LoRA (LoRA (emb) means the embedding and output layers are trained, while plain LoRA does not optimize those parameters) and Q-LoRA, with inputs of different lengths. The evaluation runs on a single A100-SXM4-80G GPU with CUDA 11.8 and PyTorch 2.0, using flash attention 2. We uniformly use a batch size of 1 and gradient accumulation of 8, and record the GPU memory usage (GB) and training speed (s/iter) for input lengths of 256, 512, 1024, 2048, 4096, and 8192. We also measured full-parameter fine-tuning of Qwen-7B on 2 A100s; limited by GPU memory, we only tested inputs of 256, 512, and 1024 tokens.

For Qwen-7B, we additionally tested multi-node fine-tuning performance. The evaluation ran on two servers, each with two A100-SXM4-80G GPUs, with all other settings identical to the other Qwen-7B evaluations. The multi-node results are marked as LoRA (multinode) in the table.

For Qwen-72B, we tested two setups: 1) LoRA + DeepSpeed ZeRO 3 on 4 A100-SXM4-80G GPUs, and 2) Q-LoRA (Int4) on a single A100-SXM4-80G GPU. Note that both LoRA (emb) fine-tuning and LoRA fine-tuning without DeepSpeed ZeRO 3 run out of memory on 4 A100-SXM4-80G GPUs (you can enable the DeepSpeed ZeRO 3 configuration by passing --deepspeed finetune/ds_config_zero3.json to finetune/finetune_lora_ds.sh).

Each cell shows GPU memory usage (GB) / training speed (s/iter) at the given input sequence length:

| Model size | Method | #Nodes | #GPUs per node | 256 | 512 | 1024 | 2048 | 4096 | 8192 |
|---|---|---|---|---|---|---|---|---|---|
| 1.8B | LoRA | 1 | 1 | 6.7G / 1.0s/it | 7.4G / 1.0s/it | 8.4G / 1.1s/it | 11.0G / 1.7s/it | 16.2G / 3.3s/it | 21.8G / 6.8s/it |
| 1.8B | LoRA (emb) | 1 | 1 | 13.7G / 1.0s/it | 14.0G / 1.0s/it | 14.0G / 1.1s/it | 15.1G / 1.8s/it | 19.7G / 3.4s/it | 27.7G / 7.0s/it |
| 1.8B | Q-LoRA | 1 | 1 | 5.8G / 1.4s/it | 6.0G / 1.4s/it | 6.6G / 1.4s/it | 7.8G / 2.0s/it | 10.2G / 3.4s/it | 15.8G / 6.5s/it |
| 1.8B | Full-parameter | 1 | 1 | 43.5G / 2.1s/it | 43.5G / 2.2s/it | 43.5G / 2.2s/it | 43.5G / 2.3s/it | 47.1G / 2.8s/it | 48.3G / 5.6s/it |
| 7B | LoRA | 1 | 1 | 20.1G / 1.2s/it | 20.4G / 1.5s/it | 21.5G / 2.8s/it | 23.8G / 5.2s/it | 29.7G / 10.1s/it | 36.6G / 21.3s/it |
| 7B | LoRA (emb) | 1 | 1 | 33.7G / 1.4s/it | 34.1G / 1.6s/it | 35.2G / 2.9s/it | 35.1G / 5.3s/it | 39.2G / 10.3s/it | 48.5G / 21.7s/it |
| 7B | Q-LoRA | 1 | 1 | 11.5G / 3.0s/it | 11.5G / 3.0s/it | 12.3G / 3.5s/it | 13.9G / 7.0s/it | 16.9G / 11.6s/it | 23.5G / 22.3s/it |
| 7B | Full-parameter | 1 | 2 | 139.2G / 4.0s/it | 148.0G / 4.0s/it | 162.0G / 4.5s/it | - | - | - |
| 7B | LoRA (multinode) | 2 | 2 | 74.7G / 2.09s/it | 77.6G / 3.16s/it | 84.9G / 5.17s/it | 95.1G / 9.25s/it | 121.1G / 18.1s/it | 155.5G / 37.4s/it |
| 14B | LoRA | 1 | 1 | 34.6G / 1.6s/it | 35.1G / 2.4s/it | 35.3G / 4.4s/it | 37.4G / 8.4s/it | 42.5G / 17.0s/it | 55.2G / 36.0s/it |
| 14B | LoRA (emb) | 1 | 1 | 51.2G / 1.7s/it | 51.1G / 2.6s/it | 51.5G / 4.6s/it | 54.1G / 8.6s/it | 56.8G / 17.2s/it | 67.7G / 36.3s/it |
| 14B | Q-LoRA | 1 | 1 | 18.7G / 5.3s/it | 18.4G / 6.3s/it | 18.9G / 8.2s/it | 19.9G / 11.8s/it | 23.0G / 20.1s/it | 27.9G / 38.3s/it |
| 72B | LoRA + DeepSpeed ZeRO 3 | 1 | 4 | 215.4G / 17.6s/it | 217.7G / 20.5s/it | 222.6G / 29.4s/it | 228.8G / 45.7s/it | 249.0G / 83.4s/it | 289.2G / 161.5s/it |
| 72B | Q-LoRA | 1 | 1 | 61.4G / 27.4s/it | 61.4G / 31.5s/it | 62.9G / 41.4s/it | 64.1G / 59.5s/it | 68.0G / 97.7s/it | 75.6G / 179.8s/it |

For deployment and faster inference, we recommend using vLLM.

If you are on CUDA 12.1 and PyTorch 2.1, you can install vLLM directly with the following command.

Otherwise, please refer to vLLM's official installation instructions.

vLLM + Transformers-like interface

Download the interface wrapper code into the current folder and run the following command for multi-turn chat. (Note: this method currently only supports the model.chat() interface.)

vLLM + Web demo / OpenAI-style API

You can use FastChat to set up a web demo or an OpenAI-style API server. First, install FastChat:

Before running Qwen with vLLM and FastChat, first start a controller:

Then start a model worker to load the model. For single-GPU inference, run the following command:

If you want to use multiple GPUs for faster inference or more memory, you can use the model parallelism supported by vLLM. Assuming you run the model on 4 GPUs, the command looks like this:

After starting the model worker, you can launch a:

  • Web UI demo

Before using the OpenAI-style API, read our API section to set up the environment, then run the following command:

If you find vLLM and FastChat difficult to use, you can also try the simplest methods we provide below to deploy a web demo, a CLI demo, and an OpenAI-style API.

We provide a Web UI demo (thanks to @wysaid for the support). Before you start, make sure the following packages are installed:

Then run the following command and click the generated link:

We provide a simple interactive demo example; see cli_demo.py. The model supports streaming output: you can interact with Qwen-7B-Chat by typing text, and the model streams the results back. Run the following command:

We provide a way to deploy a local API in the OpenAI API format (thanks to @hanpenggit). Before you start, install the required packages:

Then run the following command to deploy your local API:

You can also change the arguments, e.g., -c to change the model name or path, --cpu-only to deploy on CPU, and so on. If you run into problems with the deployment, updating the packages above usually solves most of them.

Using the API is just as simple. See the example below:
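
A short sketch, assuming the openai<1.0 Python client used by the repository's examples and the local server started above on port 8000:

```python
import openai

# Point the client at the locally deployed openai_api.py server.
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "none"

response = openai.ChatCompletion.create(
    model="Qwen",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=False,
)
print(response.choices[0].message.content)
```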

This interface also supports function calling, but for now only when stream=False. See the function-calling example for usage.

🐳 Using pre-built Docker images

To simplify deployment, we provide Docker images with the environment pre-configured: qwenllm/qwen. You only need to install the driver and download the model files to launch demos, deploy the OpenAI-style API, and run fine-tuning.

  • Install the Nvidia driver version required by the image you want to use:
  • qwenllm/qwen:cu117 (recommended): >= 515.48.07
  • qwenllm/qwen:cu114 (flash-attention not supported): >= 470.82.01
  • qwenllm/qwen:cu121 : >= 530.30.02
  • qwenllm/qwen:latest : same as qwenllm/qwen:cu117
  • Install and configure docker and nvidia-container-toolkit:
  • Download the model and code locally (see the instructions referenced here)

Below we use Qwen-7B-Chat as an example. Before launching the web demo or deploying the API, complete the configuration as shown in the code below:

The following script can handle the deployment for you:

These commands automatically pull the required image and start the Web UI demo in the background. You can open http://localhost:${PORT} to use the demo.

If you see the following output, the demo has started successfully:

To check the demo's status, you can show its output with this command: docker logs qwen.

You can stop the service and remove the container with: docker rm -f qwen.

Fine-tuning with the pre-configured Docker image works essentially the same way as in the previous chapter (the dependencies are already installed in the image):

Here is an example of single-GPU LoRA fine-tuning:

To switch to a single-GPU Q-LoRA fine-tuning example, just change the bash command inside docker run:

🔥 System Prompt

Qwen-1.8B-Chat and Qwen-72B-Chat have been thoroughly trained on diverse system prompts involving complex multi-turn interactions, so the models can follow a wide variety of system prompts and be customized in-context, further improving Qwen's extensibility.

Through system prompts, Qwen-Chat can do role play, language style transfer, task setting, behavior setting, and more.


For more information about system prompts, please refer to the example documentation.

Qwen-Chat has been optimized for tool use and function calling. Users can build Qwen-based agents, LangChain applications, and even a Code Interpreter.

We provide documentation on implementing tool calls based on the ReAct prompting principle; see the ReAct example. Building on that principle, we provide function calling support in openai_api.py. We tested the models' tool-calling ability on an open-sourced Chinese evaluation dataset and found that Qwen-Chat achieves consistently strong results:

Chinese tool-calling evaluation benchmark (version 20231206):

| Model | Tool Selection (Acc.↑) | Tool Input (Rouge-L↑) | False Positive Error↓ |
|---|---|---|---|
| GPT-4 | 98.0% | 0.953 | 23.9% |
| GPT-3.5 | 74.5% | 0.807 | 80.6% |
| Qwen-1_8B-Chat | 85.0% | 0.839 | 27.6% |
| Qwen-7B-Chat | 95.5% | 0.900 | 11.6% |
| Qwen-14B-Chat | 96.9% | 0.917 | 5.6% |
| Qwen-72B-Chat | 98.2% | 0.927 | 1.1% |

To assess Qwen's ability to use a Python Code Interpreter for tasks such as mathematical problem solving, data visualization, file handling, and web crawling, we built and open-sourced a benchmark specifically for evaluating these capabilities. We found that Qwen performs well on both the executable rate of the generated code and the correctness of the results:

Code Interpreter Benchmark (version 20231206). The first three columns report the accuracy of code execution results (%); the last column reports the executable rate of the code (%):

| Model | Math↑ | Visualization-Hard↑ | Visualization-Easy↑ | General↑ |
|---|---|---|---|---|
| GPT-4 | 82.8 | 66.7 | 60.8 | 82.8 |
| GPT-3.5 | 47.3 | 33.3 | 55.7 | 74.1 |
| LLaMA2-13B-Chat | 8.3 | 1.2 | 15.2 | 48.3 |
| CodeLLaMA-13B-Instruct | 28.2 | 15.5 | 21.5 | 74.1 |
| InternLM-20B-Chat | 34.6 | 10.7 | 25.1 | 65.5 |
| ChatGLM3-6B | 54.2 | 4.8 | 15.2 | 67.1 |
| Qwen-1.8B-Chat | 25.6 | 21.4 | 22.8 | 65.5 |
| Qwen-7B-Chat | 41.9 | 23.8 | 38.0 | 67.2 |
| Qwen-14B-Chat | 58.4 | 31.0 | 45.6 | 65.5 |
| Qwen-72B-Chat | 72.7 | 41.7 | 43.0 | 82.8 |


We use techniques such as NTK interpolation, window attention, and LogN attention scaling to extend the models' context length beyond the training sequence length. Qwen-14B, with a native length of 2K, can be extended to 8K, while Qwen-1.8B/7B, with a native length of 8K, perform well with 32K-long sequences.

For Qwen-72B, we adopt a larger rotary base on top of RoPE to handle longer contexts. Qwen-72B supports a context length of 32K.

Language-modeling experiments on the arXiv dataset show that Qwen achieves excellent performance in long-context scenarios. The results are as follows:

[Table: language-model perplexity on arXiv at sequence lengths 1024, 2048, 4096, 8192, 16384, and 32768 for Qwen-1.8B, Qwen-7B, Qwen-14B, and Qwen-72B, with and without dynamic NTK, LogN, and window attention. Most cells were lost in this copy; the surviving rows show, for example, Qwen-7B (original) at 4.23 / 3.78 / 39.35 / 469.81 / 2645.09 over 1024-16384 tokens, improving to 4.23 / 3.78 / 3.59 / 3.66 / 5.71 with dynamic NTK.]

Furthermore, to verify Qwen-72B-Chat's ability on long-text tasks, we tested it on the L-Eval closed-ended questions; the scores are as follows:

[Table: L-Eval closed-ended results (Average, Coursera, GSM, QuALITY, TOEFL, CodeU, SFcition). Several cells were lost in this copy; the surviving values show ChatGPT-3.5-16k (16K input) averaging 60.73, while the 32K-input row (Qwen-72B-Chat, per the surrounding text) averages 58.13.]

We also ran a "needle in a haystack" experiment (an idea from @Greg Kamradt) to test whether the model can retrieve information placed at different positions in inputs of different lengths. The results are as follows:


These results show that Qwen-72B-Chat can accurately retrieve information placed at various positions within inputs of up to 32K tokens, demonstrating its excellent long-text handling capability.

Note: the term "tokenizer" is kept in English here, as it has no generally agreed-upon Chinese equivalent.

The tiktoken-based tokenizer differs from other tokenizers such as the sentencepiece tokenizer. Special tokens need particular care, especially during fine-tuning. For more information on the tokenizer and on how it is used in fine-tuning, please refer to the documentation.

We provide evaluation scripts so you can reproduce our experimental results. Note that because of small differences between our internal code and the open-source code, the evaluation results may differ slightly from the reported numbers. Please read eval/EVALUATION.md for more information.

If you run into problems, please consult the FAQ and the existing issues first, and open a new issue only if those do not solve your problem.

If you find our work helpful, feel free to cite it!

The source code in https://github.com/QwenLM/Qwen is licensed under the Apache 2.0 license; you can find the full license text in the repository root.

Researchers and developers are free to use Qwen and Qwen-Chat and to build on top of them. For commercial use, please check the LICENSE of each model.

Qwen-72B, Qwen-14B, and Qwen-7B are licensed under the Tongyi Qianwen LICENSE AGREEMENT; you can find the original text in the corresponding model's HuggingFace or ModelScope repository. For commercial use, you only need to follow the terms of the agreement; we also welcome you to fill out the questionnaire (72B, 14B, 7B).

Qwen-1.8B is licensed under the Tongyi Qianwen RESEARCH LICENSE AGREEMENT; you can find the original text in the corresponding model's HuggingFace or ModelScope repository. For commercial use, please contact us.

If you would like to leave a message for our research or product teams, you are welcome to join our WeChat group and Discord server. You can also contact us by email ([email protected]).


COMMENTS

  1. Presentation Receiver API Sample

    Presentation Receiver API Sample. Available in Chrome 59+. This sample illustrates the use of the Presentation API, which gives the ability to access external presentation-type displays and use them for presenting Web content. The PresentationRequest object is associated with a request to initiate or reconnect to a presentation made by a controlling ...

  2. Presentation API

    The Presentation API lets a user agent (such as a Web browser) effectively display web content through large presentation devices such as projectors and network-connected televisions. Supported types of multimedia devices include both displays which are wired using HDMI, DVI, or the like, or wireless, using DLNA, Chromecast, AirPlay, or Miracast. In general, a web page uses the Presentation ...

  3. Presentation API Demo

    This demo page provides an example of the video sharing use case of Presentation API which is being developed in the Second Screen Presentation Community Group in W3C. Downloading the binaries, or building Chromium with the patches listed below applied to your tree allows you to run the demo. Opening this page in the modified version of ...

  4. Presentation API demos

    Presentation API demos. In the spirit of experimentation, the Second Screen Presentation Community Group has been working on a series of proof-of-concept demos for the Presentation API, using custom browser builds and/or existing plug-ins to implement or emulate the Presentation API, when available, or falling back to opening content in a ...

  5. Presentation API

    The Presentation API aims to make presentation displays such as projectors, attached monitors, and network-connected TVs available to the Web. It takes into account displays that are attached using wired (HDMI, DVI, or similar) and wireless technologies (Miracast, Chromecast, DLNA, AirPlay, or similar). Devices with limited screen size lack the ...

  6. Presentation API Demo

    Presentation API Demonstration. Presentation Session. Start Close

  7. HTML Slidy remote A Presentation API demo

    How the demo works: sender side. When the user enters the URL of a slide show, the demo page: checks its origin and rejects unknown ones; calls navigator.presentation.requestSession with the appropriate receiver app; uses the returned PresentationSession object to tell the receiver app to load the slideshow; displays the Slidy remote.

  8. Presentation Controller API (Google Cast) Sample

    A presentation can be started by calling the start() method on the PresentationRequest object. Note that this demo uses a cast: URL to start the presentation instead of the receiver page's URL. This will load the receiver page on a Chromecast, but the sender page will be unable to communicate with it as the Chromecast does not implement the ...

  9. Presentation API Demo

    Presentation API Demo. This directory contains a demo of a Presentation API controller and receiver. The demo supports flinging a URL to start a presentation and stopping the presentation. Command line options. The same executable is run for the controller and receiver; only the command line options affect the behavior. The command line options ...

  10. presentation-api-demo/index.html at master

    Web Page Demo of Presentation API https://labs.othersight.jp/presentation-api-demo/ - tomoyukilabs/presentation-api-demo

  11. Presentation API 3.0

    1.1. Objectives and Scope. The objective of the IIIF (pronounced "Triple-Eye-Eff") Presentation API is to provide the information necessary to allow a rich, online viewing environment for compound digital objects to be presented to a human user, often in conjunction with the IIIF Image API.

  12. Samples

    The examples listed in this section demonstrate how to express common actions in Slides as Slides API requests. These examples are presented as HTTP requests to be language neutral. To learn how to implement Slides API request protocols in a specific language using Google API client libraries, see the following guides: Create a slide.

  13. Present web pages to secondary attached displays

    Chrome 66 allows web pages to use a secondary attached display through the Presentation API and to control its contents through the Presentation Receiver API. I recommend the interactive Photowall demo as well. This web app allows multiple controllers to collaboratively present a photo slideshow on a ...

  14. Presentation API demo + experimental Chromium build

    Languages. JavaScript 90.8%. CSS 9.2%. Presentation API demo + experimental Chromium build - webscreens/demo.

  15. The Best APIs to Create PowerPoint Presentations

    Powerpoint Generator API. Google Slides API. Microsoft Graph API. We'll also dive deeper into the following specific use case: Using an API to generate reports automatically. Especially with the ...

  16. Presentation

    Presentation | Android Developers. Essentials. Gemini in Android Studio. Your AI development companion for Android development. Learn more. Get Android Studio. Get started. Start by creating your first app. Go deeper with our training courses or explore app development on your own.

  17. HTML Slidy remote

    The shim could be re-used in other demos. License. The source code is available under the W3C Software license. Contact. For feedback on the demo or on the Presentation API itself, use the [email protected] mailing-list (with public archive) or get in touch with Francois Daoust if you do not wish your comment to appear in public.

  18. The HTML presentation framework

    Create Stunning Presentations on the Web. reveal.js is an open source HTML presentation framework. It's a tool that enables anyone with a web browser to create fully-featured and beautiful presentations for free. Presentations made with reveal.js are built on open web technologies. That means anything you can do on the web, you can do in your ...

  19. SlideSpeak API

    AI generate presentations and summaries. Summarize and generate PowerPoint files with ChatGPT. Upload, chat, and gain new insights from your presentations. Use SlideSpeak AI to boost your productivity. Use the SlideSpeak API to generate presentations, summarize presentations and more. Create PowerPoints using our API interface.

  20. Presentation

    The Presentation can be defined as two possible user agents in the context: Controlling user agent and Receiving user agent. In controlling browsing context, the Presentation interface provides a mechanism to override the browser default behavior of launching presentation to external screen. In receiving browsing context, Presentation interface ...

  21. Introduction

    Start using the REST API's. See REST API's. 3. Purchase a subscription. PresentationGPT gives you free access to the API except for the download links of the files. In order to get access to the PPTX, PDF & Google Slides links you will need to pay for an API plan. Navigate to the API keys page in your dashboard to subscribe to an API plan.

  22. Api Example Template

    Showcase the comparison between web services and API in order to make your presentation effective. Also, showcase the methods involved in API testing such as GET, POST, Delete, and PUT by utilizing this interface PPT layout. There are various high-quality icons present in the slide which will make your presentation informative and reliable.

  23. PDF E-Rate Open Data Course 1: Open Data Overview August 27, 2024

    during the presentation. • Write in full sentences. • Ask one question at a time. • Ask questions related to today's webinar content. • To view answers: • Click the box with the arrow icon in the top right corner of the Questions box to expand it and reveal all written answers. 5 Housekeeping: Q&A

  24. What is a Product Demonstration

    A product demonstration is a focused presentation that showcases a product's key features, benefits, and applications. It's a vital tool in sales and marketing, designed to: Highlight the product's value; Illustrate its practical uses; Address potential customer needs; Product demos can take various forms: Live presentations; Interactive ...

  25. Power BI August 2024 Feature Summary

    Welcome to the August 2024 update. Here are a few, select highlights of the many we have for Power BI. You can now ask Copilot questions against your semantic model. Updated Save and Upload to OneDrive Flow in Power BI and Narrative visual with Copilot is available in SaaS embed. There is much more to explore, please continue to read on!

  26. presentation-api-demo/viewer.html at master

    You signed in with another tab or window. Reload to refresh your session. You signed out in another tab or window. Reload to refresh your session. You switched accounts on another tab or window.

  27. PDF International Space Station (ISS) as a Testbed For Exploration ECLSS

    • This presentation describes the overall effort, its integration into the ISS vehicle, and its progress ... •H2 Sensor Tech Demo was relocated to Oxygen Generation System (OGS) Rack upon relocation to the Lab in Aug 2022 •1 of 4 sensors is failed - others are demonstrating good performance

  28. .github/profile/reference-implementation.md at main · eu-digital

    Currently, it also includes Demo App, demonstrating the following capabilities: Proximity presentation, and Same Device Online Presentation and issuing of PID and mDL. Verifier Apps and Services. Repository ... Restful API (web-services) Demo Web Verifier application (Backend Restful service) that acts as a Verifier/RP trusted end-point ...

  29. Gemini Nano language detection API available for early preview

    Demo. Once you've signed up and been accepted to the EPP, you'll have access to a demo so you can experiment with this API. Join the early preview program. As of now, the Prompt API, summarization API, and the language detection API are available for prototyping.

  30. Qwen/README_CN.md at main · QwenLM/Qwen · GitHub

    最简单的使用Qwen模型API服务的方法就是通过DashScope(阿里云灵积API模型服务)。我们提供了简单介绍说明使用方法。同时,我们还提供了自己部署OpenAI格式的API的方法。 DashScope是阿里云提供的大语言模型的API服务,目前支持Qwen。