Settings
These are the settings that use the sandbox.
Sandbox Variables
This is a key-value, dictionary-like configuration. It allows you to create predefined variables and have them accessible across all sandbox usages.
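As an illustration, a sandbox variables configuration could look like the following; the keys and values here are hypothetical, not predefined names:

```json
{
  "warehousePrefix": "WH",
  "defaultCurrency": "EUR",
  "priceThreshold": "100"
}
```

Each key then becomes available wherever sandbox variables are exposed.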
Match database name
Does not use the sandbox directly but a simplified version of it. It returns the name of the matches database (customer data).
Capabilities
- Parses the code like a string template
- Sandbox Variables
- project, branch, client documents
Code example
db-${client.name}-${project.name}-drinks
Verify function
Runs custom logic to mark which scans need manual verification for a given zone. The function is supposed to set flags via the provided helper setVerificationNeeded.
Capabilities
- setVerificationNeeded(scanIds[], boolean)
- zone (current zone document)
- sessionId (current session id)
- project, branch, client documents
- Sandbox Variables
- Standard sandbox helpers
Code example
const sessions = await findZoneSessions({ zoneId: zone._id });
for (const zoneSession of sessions) {
const scans = await findScans({ originalSessionId: zoneSession._id });
for (const scan of scans) {
const needsReview = scan.count === 0;
await setVerificationNeeded(scan._id, needsReview);
}
}
await log(`Verify run for project ${project.code}, zone ${zone.code}`);
Transform Scans
Runs custom logic to calculate prices and pieces for scans in a zone. The function sets the values in the scan document via setPrice and setPieces.
Capabilities
- setPrice(scanId, value) sets the price for a scan
- setPieces(scanId, value) sets the pieces for a scan
- scans[] array of all scans in the zone
- project, branch, client documents
- Sandbox Variables
- Standard sandbox helpers
Code example
for (const scan of scans) {
// Round prices to 2 decimal places
const roundedPrice = Number((scan.price || 0).toFixed(2));
await setPrice(scan._id, roundedPrice);
// Calculate pieces based on case pack if available
if (scan.match?.CasePack) {
const pieces = scan.count * scan.match.CasePack;
await setPieces(scan._id, pieces);
}
}
await log(`Transformed ${scans.length} scans for zone ${zone.code}`);
QM Module - Scan Configuration
Runs custom quality rule checks as a side-effect after a session is created or updated. Each rule should call the setResult function with two parameters: the scan's id (or a list of ids) and an object. The object should contain 3 fields:
- result: a boolean that signifies whether the rule was triggered
- value: can be anything; typically used to record the value that triggered the rule
- level: the severity level. Can be one of the following:
  - info
  - warning
  - notify
Capabilities
- scan (current scan document)
- Sandbox Variables
- Standard sandbox helpers
Config example
{
"priceWarning": {
"labels": {
"en": "Scan Price < 100",
"de": "Scan Preis < 100"
},
"code": "return {result: scan.price < 100, value: scan.price, level: \"warning\"};"
},
"countNotify": {
"labels": {
"en": "Scan count > 100",
},
"code": "scans.forEach(scan => { setResult(scan._id, {result: scan.count > 100, value: scan.count, level: 'notify'}); });"
}
}
QM Module - Zone Configuration
Similar to the Scan Configuration, but runs as a side-effect at the end of a zone manipulation.
Capabilities
Similar to Scan Configuration but instead of scan document it exposes the zone document
Config example
{
"BusyZone": {
"labels": {
"en": "Zone is busy",
},
"code": "setResult(zone._id, {result: zone.status.includes('busy'), value: zone.status, level: 'warning'});"
}
}
Code for generating checklists (PDF Job)
Generates PDF checklists for zones using custom code. It can run per zone to produce a single PDF, or for multiple zones to produce a ZIP with all PDFs.
Capabilities
- writePdf(data, nameWithoutExtension) creates a PDF document
- zone (current zone document)
- scanFlowConfig (parsed JSON configuration describing the scan flow)
- hasVerifyReasons (boolean indicating if verification reasons are enabled)
- containsDifferentCount(data, activeSessions) returns true if a scan or scan group has different counts across revisions in active sessions
- stocktraceScansFieldMapping (mapping object that translates scanner field names to Stripes scan field names, e.g., 'Barcode' → 'matchCode', 'ScanID' → 'uuid', ...)
- job (document with logging and progress functionality)
- Sandbox Variables
- Standard sandbox helpers
- Enhanced sandbox helpers
Code example
await job.log(`Generating checklist for zone ${zone.code}`);
// Build PDF document definition using pdfmake format
const docDefinition = {
content: [
{ text: `Checklist for Zone: ${zone.code}`, style: 'header' },
{ text: `Project: ${project.name}`, style: 'subheader' },
{ text: `Generated: ${moment().format('YYYY-MM-DD HH:mm')}`, margin: [0, 10, 0, 10] },
{
table: {
headerRows: 1,
widths: ['*', 'auto', 'auto'],
body: [
['Scan Code', 'Count', 'Status'],
...scans.map(scan => [
scan.code || scan.match?.productCode || 'N/A',
scan.count || 0,
scan.needsVerification ? 'Needs Review' : 'OK'
])
]
}
}
],
styles: {
header: { fontSize: 18, bold: true, margin: [0, 0, 0, 10] },
subheader: { fontSize: 14, bold: true, margin: [0, 0, 0, 5] }
},
defaultStyle: { font: 'Roboto' }
};
await writePdf(docDefinition, zone.code);
await job.log(`Checklist generated successfully for zone ${zone.code}`);
Export Config
Defines export runs that represent export jobs. Each run contains JavaScript code that generates files (CSV, XLSX, PDF, ZIP) and in certain cases, creates export sessions. The configuration supports inheritance across setting levels, by combining all unique names of export runs.
Export configurations are merged from multiple setting levels in this order: Global settings < Client settings < Branch settings < Project settings
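Based on that description, the merge can be sketched as a plain object spread, where later (more specific) levels win for runs with the same name and all unique run names are kept. The run names and titles below are hypothetical, and the spread is an assumed model of the merge semantics:

```javascript
// Hypothetical exportRuns defined at two setting levels.
const globalRuns = {
  closeout: { title: 'Global Closeout' },
  audit: { title: 'Audit' },
};
const projectRuns = {
  closeout: { title: 'Project Closeout' }, // overrides the global run of the same name
};

// Later levels override same-named runs and contribute new ones;
// the merged result contains all unique run names.
const mergedRuns = { ...globalRuns, ...projectRuns };
```

After merging, `mergedRuns` contains both `audit` (from global) and `closeout` (the project version).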
Export Run Object Structure
Each export run in the exportRuns object contains:
- code (string, required): JavaScript code to be executed. The result is saved on job.data. Note that the code must be added as a string, so you need to stringify it properly; in the examples below it is shown as non-stringified code for easier viewing. We also support yaml configuration, in which case the code might not need to be stringified, but this has not been tested yet.
- title (string, optional): The name of the export
- group (string, optional): The name of the export group
- allowWhenProjectNotClosed (boolean, optional): Determines if the export can be started before the project is closed
- isHidden (boolean, optional): Hides this export from the client
- exportVariables (object, optional): A list of variables, available during export on the job.data.exportVariables object as key-value pairs (e.g. { key1: "value 1", key2: "value 2" })
Capabilities
- insertExportSessions(zoneIds): Creates export sessions for zones and returns session data
- startExportSession(sessionId): Marks an export session as started and sets the zone status accordingly
- closeExportSession(sessionId): Marks an export session as closed and sets the zone status accordingly
- findZoneChangeRequests(selector): Finds zone change requests for the project
- findCompletedExportJobs(selector): Finds completed export jobs for the project
- findOneZoneChangeRequest(selector): Finds a single zone change request
- findOneCompletedExportJob(selector): Finds a single completed export job
- writeFile({ name, data }): Creates a file and uploads it to the cloud storage
- generatePdfsForZones(zoneIds): Generates PDFs for zones using the Checklist Code
- successfulExportsCount: Number of successful export jobs for this export type
- _ (deprecated): The underscore.js library
- JSZip ZIP file creation library
- csvStringify CSV generation library
- xlsx Excel file generation library
- reportHelpers: Helper functions for report generation:
  - getZonenCount(projectId): Returns an array of zone statistics including total scans, verified scans, edited scans, products, and values. Includes a summary row ("Gesamt") with totals. Numbers are formatted with German locale (comma as decimal separator)
  - getKorrekturen(projectId): Returns an array of scan corrections/changes showing old and new counts, with zone, match code, description, price, counter, and validator information
  - getZonenSummary(projectId): Returns a summary object with total zone count, count of blocked zones, and count of verified zones
  - getCounterValidator(projectId): Returns an array of zones with the registered users that did the original count (counter) and the subsequent verification (validator), sorted by zone code
- Sandbox Variables
- Standard sandbox helpers
- Enhanced sandbox helpers
Config Example
exportRuns: {
closeout: {
title: "Closeout Export"
allowWhenProjectNotClosed: false
code: {
await job.log('Starting closeout export');
// Get zones to export
const zones = await findZones({ projectId: project._id, deleted: false });
const zoneIds = zones.map(z => z._id);
// Create export sessions
const sessions = await insertExportSessions(zoneIds);
await job.log(`Created ${sessions.length} export sessions`);
// Generate CSV file
const csvData = [];
csvData.push(['Zone Code', 'Scan Code', 'Count', 'Price']);
for (const zone of zones) {
const scans = await findScans({ zoneId: zone._id, deleted: false });
for (const scan of scans) {
csvData.push([
zone.code,
scan.code || scan.match?.productCode || 'N/A',
scan.count || 0,
scan.price || scan.match?.price || 0
]);
}
}
const csvBuffer = await csvStringify(csvData);
await writeFile({ name: 'closeout-export.csv', data: csvBuffer });
// Close all sessions
for (const session of sessions) {
await closeExportSession(session.zoneSessionId);
}
await job.log('Closeout export completed');
return { filesCreated: 1, sessionsProcessed: sessions.length };
}
},
summary: {
title: "Summary Report"
allowWhenProjectNotClosed: true
code: {
await job.log('Generating summary report');
const zones = await findZones({ projectId: project._id });
const pdfs = await generatePdfsForZones(zones.map(z => z._id));
// Create ZIP with all PDFs
const zip = new JSZip();
pdfs.forEach((pdfBuffer, index) => {
zip.file(`zone-${zones[index].code}.pdf`, pdfBuffer);
});
const zipBuffer = await zip.generateAsync({ type: 'nodebuffer' });
await writeFile({ name: 'summary-report.zip', data: zipBuffer });
await job.log('Summary report generated');
return { filesCreated: 1 };
}
}
}
Import Config
The Import System allows projects to upload CSV, Excel, or XML files and convert their rows into structured data inside the Stripes database. Each import configuration defines:
- How a file should be parsed
- Which entity type should be created or updated (Match, Zone, Scan, Stock)
- How to interpret each column
- Optional transformation logic before/after inserting data
Just like the Export Config, import configurations are defined as part of the settings and may contain multiple files, each creating a new entry in the Import page. Note that files accumulate across the settings hierarchy in the order global < client < branch < project. If a file's ID is found lower in the hierarchy, it gets replaced.
File Structure
An Import configuration contains:
- Files[] Each describing one importable file type
- Each file has one or more Runs[], defining:
- Destination entity (Match, Zone, Scan, Stock)
- Field mapping
- Parser type
- Hooks (PreHook, PreInsertHook, PostHook)
- Optional custom row parser
Supported File Types
The importer automatically detects which parser to use based on FileConfig:
CSV
- Supports header row or manual header definitions
- Controls:
- Delimiter
- TextQualifier
- Encoding (utf8 by default)
- SkipLines
- Headers
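Putting those controls together, a minimal CSV file configuration might look like this; the ID, header names, and values are illustrative, and the array shape of Headers is an assumption:

```json
{
  "ID": "example-csv",
  "ParserType": "CSV",
  "Delimiter": 59,
  "TextQualifier": 34,
  "Encoding": "utf8",
  "SkipLines": 1,
  "Headers": ["code", "description"]
}
```

Delimiter 59 is the character code for `;` and TextQualifier 34 is `"`, matching the character-code convention used in the config examples below.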
Excel (.xlsx)
- Uses SheetJS
- Supports:
- SheetName (name or index)
- Headers
- SkipLines
XML
- Supports simple and complex XML structures
- Controls:
- IsComplex (enables advanced flattening)
- RowSchema (JSON schema for complex XML)
- ResourcePath (defaults to '/')
- AttrsKey / TextKey
Import Object Structure
File Configuration
A file import definition contains:
- ID (string, required): Unique identifier of the import file
- GroupName (string, optional): Logical grouping of files
- Headers (optional): Explicit column names
- FileDescription (string, optional): Human-readable description
- Remarks (string, optional): Notes for preparing the file
- SkipLines (number): Number of top rows to ignore
- Encoding (ascii | utf8 | utf16le | ucs2 | base64 | latin1 | binary | hex): CSV only, the charset of the file
- Delimiter (integer, optional): Character code for the field delimiter (e.g., 44 for comma)
- TextQualifier (integer, optional): Character code for the text qualifier (e.g., 34 for ")
- FilenameRegex (string, optional): Regex to match file names
- ParserType (CSV | Excel | XML, optional): Parser type for the file (e.g., "Excel")
- IsHidden (boolean, optional): Hides file from UI
- AutoCreateJob (boolean, optional): Automatically create import jobs when the file is uploaded
- Runs (array, required): Defines multiple import runs for this file
- ContinueWith: The id of the job to continue with after this one is finished
Example:
Files: [
  {
    "ID": "zones-upload",
    "ParserType": "CSV",
    "SkipLines": 1,
    "Runs": [
      {
        "DestinationEntityName": "Zone",
        "Fields": [
          {
            "FieldHeader": "Zone ID",
            "DBFieldNames": "code",
            "Operations": [
              {
                "Pad": [13, "0"]
              }
            ]
          }
        ]
      }
    ]
  }
]
Run Configuration
Each run inside Runs[] supports:
- DestinationEntityName (required). Must be one of:
  - Match
  - Zone
  - Scan
  - Stock
- Fields[]: A list of field mappings. Each field specifies:
  - FieldHeader: Name of the column in the input file
  - DBFieldNames: One or more internal field names (comma-separated)
  - StaticValue (optional): Use a fixed value instead of reading from the file
  - Operations (optional): Basic transformations (uppercase, trim, default values, etc.)
- RowParser (optional): A sandboxed JavaScript function that can:
  - Transform one row into many rows
  - Normalize field formats
  - Apply arbitrary business logic
- PreHook (optional): Executes before reading the file (e.g. truncate tables, cache lookups, prepare temp data)
- PreInsertHook (optional): Executes for each transformed row before validation/insertion
- PostHook (optional): Executes after all rows have been imported
How File Parsing Works
All file types are transformed into a stream of row objects, so even huge files can be imported safely without loading everything into memory.
CSV
Each row becomes a JavaScript object of { columnName: value }.
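A minimal sketch of that mapping, assuming a semicolon delimiter and hypothetical column names:

```javascript
// Split a header line and a data line, then zip them into a row object.
const headers = 'UserCode;FunctionFlag'.split(';');
const values = '004711;A'.split(';');
const row = Object.fromEntries(headers.map((name, i) => [name, values[i]]));
// row → { UserCode: '004711', FunctionFlag: 'A' }
```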
Excel
Converted using xlsx.utils.sheet_to_json, respecting:
- Headers
- Skipped lines
- Selected sheet
XML
Depending on IsComplex:
- Simple → flattened using element text
- Complex → deep flattening according to RowSchema
Automatic Header Validation
Before importing, the system ensures:
- All required column headers exist
- Missing headers result in:
missing-headers: Missing header(s) ...
Row Transformation Layer
Before a row can be inserted, it passes through several steps:
- Field Mapping (Fields[]) Each field configuration applies:
- FieldHeader → read from row
- Operations → basic transformations
- DBFieldNames → nested fields using dot notation
- Removes unused fields automatically
Example:
"Fields": [
{
"FieldHeader": "z_EAN",
"DBFieldNames": "code",
"Operations": [
{
"Pad": [13, "0"]
}
]
},
{
"FieldHeader": "z_ArtNo",
"DBFieldNames": "productCode"
},
{
"FieldHeader": "d_SalesPrice",
"DBFieldNames": "price",
"Operations": [
{
"ToDecimal": [44]
},
{
"DivideBy": [100]
}
]
}
]
- Static Values: If StaticValue is set, the file column is ignored.
- Custom RowParser (optional): A sandboxed JavaScript function allowing:
  - Custom transformation logic
  - Returning multiple output rows (e.g. splitting one row into many)
  - Lookups based on DB or project settings
  - Format conversions
Example RowParser:
if (!this.Code) return [];
return [{ code: this.Code.trim() }];
- fixData (PreInsertHook): Another sandboxed function executed for each row after field mapping and RowParser.
- Validation: Each entity type has its own validation rules.
- Insert/Update into DB: Handled by the destination entity runner.
Import Types & Their Capabilities
Match Import
Imports match codes, used by scans.
Key Responsibilities
- Create or update match entries
- Ensure uniqueness
- Ensure valid price formatting
- Create indexes for fast scan lookups
Validation Rules
- code must not be empty
- type must not be empty
- price is converted to a number
- Duplicate code entries in the same file are rejected
Bulk Behavior
- Data is buffered in batches of 10,000 rows
- Batches are then inserted using insertMany for performance
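The buffering behavior can be sketched as follows; the batch size comes from the text, while the helper itself is illustrative:

```javascript
const BATCH_SIZE = 10000;

// Slice a large row array into insertMany-sized batches.
function toBatches(rows, size = BATCH_SIZE) {
  const batches = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

// 25,000 rows → 3 batches (10,000 + 10,000 + 5,000)
const batches = toBatches(new Array(25000).fill({}));
```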
Capabilities
- collection: access to the equivalent database collection through the mongo driver
- job: the database document extended with a log function that gathers all logs in a file and does some truncation and warning squashing for the UI
- project, branch, client documents
- Sandbox Variables
- Standard sandbox helpers
Zone Import
Imports zones: their codes, labels, attributes, etc.
Key Responsibilities
- Insert new zones
- Update existing zones
- Prevent modification of internal fields (flags, status, scanSessions)
- Block deletion if zone has activity
Validation Rules
- Duplicate zone codes in the SAME import file → error
- projectId must match
- Schema validation trims fields and sets defaults
- On deleted: true → rejects if zone has session activity
Insert/Update Logic
- If zone exists → update
- Else → insert
- Internal properties are preserved
Capabilities
- job: the database document extended with a log function that gathers all logs in a file and does some truncation and warning squashing for the UI
- project, branch, client documents
- Sandbox Variables
- Standard sandbox helpers
Scan Import
Creates validated scan entries AND automatically creates scan sessions for each (zone, user) pair.
Key Responsibilities
- Validate uploaded scans
- Ensure users & zones exist
- Generate sessions on the fly
- Attach each scan to a session
- Validate schema & price
- Recompute zone metadata and verification flags
Required Columns
- username
- zoneCode
- modifiedAt (optional, defaults to import time)
Session Creation Logic
- Each unique (zoneCode, username) → one session
- Client users automatically flagged
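A sketch of that grouping rule, using hypothetical rows; composing a string key per pair is an illustrative way to count unique pairs, not the actual implementation:

```javascript
const rows = [
  { zoneCode: 'A-0001', username: 'anna' },
  { zoneCode: 'A-0001', username: 'anna' }, // same pair → same session
  { zoneCode: 'A-0002', username: 'anna' }, // new pair → new session
];

// One session per unique (zoneCode, username) pair.
const sessionKeys = new Set(rows.map(r => `${r.zoneCode}::${r.username}`));
// three rows, two unique pairs → two sessions
```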
Validation
- Schema ensures proper fields
- match.price forced to number
- Missing prices → 0
Post-Processing
Executes most of the side-effects a normal session would trigger:
- Transform Scans
- Verification-needed flags
- Zone metadata & status
- Empty zone flag
- Client session flag
Capabilities
- job: the database document extended with a log function that gathers all logs in a file and does some truncation and warning squashing for the UI
- project, branch, client documents
- Sandbox Variables
- Standard sandbox helpers
Stock Import
Imports stock levels linked to match codes.
Key Responsibilities
- Validate match references
- Validate numeric expected values
- Insert/update stock items
Validation Rules
- match cannot be empty
- expected must be a number
- match.price must be numeric
Capabilities
- collection: access to the equivalent database collection through the mongo driver. Only specific methods are allowed, and every method on the collection is limited to the same project:
  - find
  - findOne
  - insertOne
  - insertMany
  - updateOne
  - updateMany
  - deleteOne
  - deleteMany
- job: the database document extended with a log function that gathers all logs in a file and does some truncation and warning squashing for the UI
- project, branch, client documents
- Standard sandbox helpers
Hooks (All Import Types)
- PreHook executes once before the run
- PreInsertHook executes on each row after mapping
- PostHook executes after all rows are inserted
Field Mapping Reference
Each entry in Fields[] supports:
- FieldHeader: file column name
- DBFieldNames: internal Stripes field paths
- StaticValue: overrides file value
- Operations: transformations (trim, uppercase, toNumber, toBoolean)
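For instance, a field entry combining a column mapping with operations and a static value might look like this. The header name is illustrative, and the capitalized operation names with array arguments are an assumption based on the Pad/ToDecimal examples elsewhere in this document:

```json
"Fields": [
  {
    "FieldHeader": "Category",
    "DBFieldNames": "extra1",
    "Operations": [
      { "Trim": [] },
      { "Uppercase": [] }
    ]
  },
  {
    "StaticValue": "P",
    "DBFieldNames": "type"
  }
]
```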
Config Example
{
"Files": [
{
"ID": "fdd775a2-9b18-42a6-9053-2747fb12f5d2",
"GroupName": "group1",
"FileDescription": "Anmeldungen",
"Remarks": "csv layout: UserCode;FunctionFlag",
"Delimiter": 59,
"TextQualifier": 34,
"FilenameRegex": ".+[.]csv$|.+[.]CSV$",
"AutoCreateJob": true,
"Runs": [
{
"DestinationEntityName": "Match",
"PreInsertHook": "if (this.UserCode) { if (this.UserCode.length === 5) { this.UserCode = this.UserCode.padStart(6, '0'); } else if (this.UserCode.length !== 6) { delete this.UserCode; } }",
"Fields": [
{ "FieldHeader": "Scan-ID", "DBFieldNames": "code" },
{ "FieldHeader": "RoleCode", "DBFieldNames": "extra1", "StaticValue": "" },
{ "StaticValue": "P", "DBFieldNames": "type" }
]
}
]
},
{
"ID": "e75beecc-6a7b-48c1-bfae-296b548acc18",
"GroupName": "Zones",
"FileDescription": "Zone Import + Update",
"Remarks": "Zone Columns: from, to, Division, Extra1",
"ParserType": "Excel",
"IsHidden": true,
"AutoCreateJob": true,
"Runs": [
{
"DestinationEntityName": "Zone",
"RowParser": "const data = []; const requiredHeaders = ['from', 'to']; const currentHeaders = Object.keys(this); let invalidHeader = false; requiredHeaders.forEach(header => { if (!currentHeaders.includes(header)) { invalidHeader = true; job.log(`Required Column \"${header}\" is missing`); } }); if (!invalidHeader) { const from = parseInt(this.from.replace(/\D/g, ''), 0); const to = parseInt(this.to.replace(/\D/g, ''), 0); if ((to - from) > 1000) { throw new Error(`Range ${from} to ${to} contains more than 1000 zones.`); } for (let i = from; i <= to; i++) { const zone = { code: `A-${String(i).padStart(4, '0')}`, division: this.Division || '', extra1: this.Extra1 || '', warehouseCode: 'test' }; data.push(zone); } } return data;",
"Fields": []
}
]
},
{
"ID": "ea50258a-4c04-4108-a7a3-3384966600df",
"GroupName": "ArticleFiles",
"FileDescription": "Stammdaten",
"Delimiter": 9,
"TextQualifier": 25,
"AutoCreateJob": true,
"Runs": [
{
"DestinationEntityName": "Match",
"PreHook": "collection.drop()",
"Fields": [
{ "FieldHeader": "z_EAN", "DBFieldNames": "code", "Operations": [{"Pad": [13, "0"]}] },
{ "FieldHeader": "z_ArtNo", "DBFieldNames": "productCode" },
{ "FieldHeader": "DeptDescr", "DBFieldNames": "extra1" },
{ "FieldHeader": "d_ArtDescr", "DBFieldNames": "description" },
{ "FieldHeader": "d_SalesPrice", "DBFieldNames": "price", "Operations": [{"ToDecimal": [44]}, {"DivideBy": [100]}] },
{ "FieldHeader": "GGRNo", "DBFieldNames": "extra2" },
{ "FieldHeader": "DeptNo", "DBFieldNames": "extra3" },
{ "FieldHeader": "MFRName", "DBFieldNames": "extra5" },
{ "StaticValue": "P", "DBFieldNames": "type" }
]
}
]
}
]
}
Registration Config
Defines custom hooks for user registration imports from XLSX files. Supports three hooks: PreHook (before parsing the file), RowParser (per-row transformation), and PostHook (after all registrations complete).
Registration Config Object Structure
The RegistrationConfig object contains:
- PreHook (string, optional): JavaScript code executed before parsing any rows. Runs once at the start of the import.
- RowParser (string, optional): JavaScript code executed for each row/user object. Must return the transformed user object. Used to normalize, validate, or enrich row data before processing.
- PostHook (string, optional): JavaScript code executed after all registrations are processed. Runs once at the end of the import, and is also triggered via UI-created registrations.
Capabilities
- PreHook:
  - job: job object with a log() method
  - project, branch, client documents
- RowParser:
  - job: job object with a log() method
  - project, branch, client documents
  - user: raw user object from the Excel row (can be modified and returned)
- PostHook:
  - job: job object with a log() method
  - userIds: array of user IDs that were created or updated during the import
  - project, branch, client documents
Config Example
{
"PreHook": "await job.log(`Starting registration import for project ${project.code}`);",
"RowParser": "user.firstName = user.firstName?.trim(); user.lastName = user.lastName?.trim(); user.language = user.language || 'en'; user.role = user.role || 'employee'; return user;",
"PostHook": "await job.log(`Registration import completed. Created/updated ${userIds.length} users`); if (userIds.length > 0) { await log(`New users registered: ${userIds.join(', ')}`); }"
}
Sandbox Capabilities
Sandbox
Used broadly (verify function, imports, registrations, QM, etc.) for custom code.
const flagged = await findScans({ needsVerification: true });
await log(`Flagged scans: ${flagged.length}`);
return flagged.map(scan => scan.code);
Enhanced Sandbox
Used mainly by checklist (PDF), export, and import configurations, with libraries and a richer job object that allows controlling the job's logs and progress.
await job.log('Export started');
await job.progress(50, 100);
return { ok: true };
Global Sandbox Capability Matrix
These are the built‑in capabilities of each sandbox flavor (everything flow/job‑specific is described in the sections above).
| Capability | sandbox | enhanced sandbox |
|---|---|---|
| log() (accessible from the system menu → Enable debugging) | ✅ | ✅ |
| findMatches() | ✅ | ✅ |
| findScans() | ✅ | ✅ |
| findUsers() | ✅ | ✅ |
| findZones() | ✅ | ✅ |
| findZoneSessions() | ✅ | ✅ |
| findRegistrations() | ✅ | ✅ |
| findAssets() | ✅ | ✅ |
| updateRegistrations() | ✅ | ✅ |
| findOneMatch() | ✅ | ✅ |
| findOneScan() | ✅ | ✅ |
| findStock() | ✅ | ✅ |
| findOneStock() | ✅ | ✅ |
| findOneUser() | ✅ | ✅ |
| findOneZone() | ✅ | ✅ |
| findOneZoneSession() | ✅ | ✅ |
| SandboxVariables | ✅ | ✅ |
| job.log() (when a job is passed) | ❌ | ✅ |
| job.progress() (when a job is passed) | ❌ | ✅ |
| moment | ❌ | ✅ |
| PdfPrinter | ❌ | ✅ |
| path | ❌ | ✅ |
| Buffer | ❌ | ✅ |
| Assets, which exposes absoluteFilePath(filePath) and getBinary(filePath) to help you read a file from the system | ❌ | ✅ |
| prettyNumber, a method that returns a representation with an accuracy of up to fourteen digits | ❌ | ✅ |
| project, branch, client database documents | ❌ | ✅ |
References
More information about the various configurations can be found here.