FORGE 2026: International Workshop on Secure and Private Federated & Distributed AI for Industrial Networks.
Co-located with FLiCS 2026 - Valencia, Spain - June 9-12, 2026.
FORGE, the workshop on Secure and Private Federated & Distributed AI for Industrial Networks, invites submissions of original research contributions on privacy and security. Accepted papers will be presented at the event and included in the proceedings.
Motivated by the rapid emergence of distributed and federated learning systems, this workshop will examine the privacy and security properties of federated AI in the context of industrial and softwarized networks. The focus is on how federated and distributed learning can both introduce new security challenges and enable novel privacy-preserving and security-enhancing mechanisms for industrial environments.
A representative use case is the protection of sensitive industrial data. For example, proprietary process parameters or “secret recipes” may be inferred from operational data. This becomes a critical issue when data must be uploaded to the cloud for analytics or monitoring. Federated and distributed approaches promise to keep data local while still enabling powerful AI-based functions such as anomaly detection and intrusion prevention. Topics such as distributed anomaly detectors and privacy-preserving techniques (e.g., D3) are therefore particularly relevant to this workshop.
Scope and topics
We welcome submissions including (but not limited to) the following topics.
Privacy-preserving federated learning for industrial networking and control systems (e.g., PLCs).
- Threat-driven privacy requirements in OT (process parameters, recipes, operational signatures) and privacy budgeting.
- Edge/PLC constraints: real-time deadlines, CPU/RAM limits, intermittent connectivity, and safe scheduling of training.
- Mechanisms and trade-offs: differential privacy, secure aggregation, encrypted/attested updates, and trusted execution environments.
- Heterogeneous and non-IID data across plants/lines: personalization, domain adaptation, and concept-drift handling under privacy constraints.
- Secure deployment: signed models, attestation, rollback/fail-safe procedures, and safety validation for control impact.
Distributed and federated security learning for industrial production lines.
- Cross-site collaborative detection for rare attacks and low-prevalence faults (label scarcity and semi-/self-supervised learning).
- Federated transfer learning for commissioning new production lines or equipment (cold-start and few-shot adaptation).
- Multi-source correlation across PLC/HMI/SCADA, historians, gateways, and network sensors without centralizing raw data.
- Continual/online learning under process drift (maintenance, seasonality, product changes) with safe update gating.
- Operationalization: federated SecMLOps (versioning, monitoring, canaries, incident response, and reproducibility).
Security-by-design methodologies for AI-based anomaly detection in industrial systems.
- Secure data lifecycle: provenance, integrity checks, sensor trust scoring, and tamper-evident logging.
- Design-time threat modeling for AI pipelines (poisoning, evasion, stealth, insider misuse) and explicit security requirements.
- Robust evaluation protocols beyond accuracy: cost-sensitive false alarms, detection delay, operator workload, and resilience metrics.
- Fail-safe and safety-aware responses: alerting vs. automated mitigation with hard constraints to avoid unsafe interventions.
- Explainability for OT operators: mapping detections to industrial semantics (tags, assets, control loops) to reduce triage time.
AI-based PLC intrusion detection and prevention systems (IDS/IPS).
- Protocol-aware modeling (e.g., Modbus/TCP, S7, OPC UA, EtherNet/IP): command semantics, stateful validation, and sequence constraints.
- Edge behavioral baselining of PLC logic and I/O patterns with scan-cycle-safe inference and minimal latency overhead.
- Prevention strategies: command whitelisting, rate limiting, policy enforcement, and safety interlocks suitable for industrial operations.
- Attack taxonomy and signals: logic/firmware tampering, parameter manipulation, replay, reconnaissance, and lateral movement.
- Testbeds and reproducibility: realistic datasets, digital twins, and safe red-team methodologies for PLC IDS/IPS evaluation.
Architectures and protocols for secure, softwarized industrial networks leveraging federated AI.
- Reference architectures across device/edge/plant/cloud tiers and placement of orchestration, aggregation, and policy enforcement.
- Secure orchestration: authentication/authorization, key management, remote attestation, and signed/verified model updates.
- Segmentation, SDN, and QoS: network slicing and traffic isolation while preserving deterministic communications.
- Resilience and availability: partition tolerance, failover strategies, intermittent links, and disaster recovery for federated services.
- Performance engineering: bandwidth caps, compression/quantization of updates, and scheduling training around production constraints.
Threat modeling, attack surfaces, and defenses specific to federated learning in industrial environments.
- FL-specific threats: poisoning/backdoors, Sybil clients, gradient leakage, membership inference, and compromised aggregators.
- Industrial attacker models: insiders, supply-chain compromise, and adversaries constrained by safety-critical processes.
- Robust aggregation and update validation: outlier resistance, client reputation, anomaly scoring on model deltas, and rollback triggers.
- Privacy vs. robustness trade-offs: secure aggregation, TEEs, and cryptographic protections versus detectability of malicious updates.
- Red-teaming and monitoring of training dynamics: drift alarms, forensic readiness, and continuous security assessment.
Privacy-preserving distributed anomaly detection for sensitive industrial data.
- Risk of inferring proprietary “secret recipes” from operational data: model inversion, membership inference, and linkage attacks.
- Distributed detectors without raw data sharing: site-local models with federated calibration, ensembles, or split-learning variants.
- Privacy-enhancing techniques in practice: differential privacy, secure aggregation, and selective cryptographic methods under OT constraints.
- Accuracy/privacy/latency trade-offs: impacts of privacy noise and encrypted pipelines on detection delay and false alarm rates.
- Use-case-driven benchmarks: quality deviations, misconfigurations, stealthy intrusions, and lessons learned from deployments.
Deployment, governance, and compliance for secure federated AI in industrial networks.
- Governance for federated pipelines: roles, responsibilities, change control, and approval gates for model updates.
- Auditability and evidence generation: provenance, reproducible training, secure logging, and reporting for regulated environments.
- Lifecycle risk management: vulnerability handling, patching strategies, and model-incident response (containment, rollback, postmortems).
- Operational KPIs: downtime avoided, false alarm cost, time-to-detect, time-to-triage, and operator workload reduction.
- Interoperability and standards alignment: mapping controls to common OT and information-security frameworks.
Submission Guidelines
- Acceptance: Accepted workshop papers will appear in the main conference IEEE proceedings.
- Format: All submissions must follow the FLiCS 2026 submission guidelines and use the conference template. Please consult the main conference website for detailed formatting instructions. Remember to add keywords to your submission.
- Long papers: full and mature contributions (8-9 pages)
- Short papers: concise contributions (4-6 pages)
- Posters: early results (1-2 pages)
Important dates
All dates are shown in AoE (UTC−12) unless stated otherwise.
Submission instructions
How to submit
- Submit via: EasyChair
- Format: Submitted papers (.pdf) must use the A4 IEEE Manuscript Templates for Conference Proceedings. Please remember to add keywords to your submission.
- Length: Long papers: 8-9 Pages, Short Papers: 4-6 Pages, Poster Papers: 1-2 Pages
- Originality: Papers submitted to FORGE must be the original work of the authors and may not be simultaneously under review elsewhere. Previously published peer-reviewed work may not be submitted. IEEE maintains a strict plagiarism policy; all prior work must be cited appropriately.
- Author List: Submit with the full and final list of authors in the correct order. The registered author list cannot be changed after the submission deadline.
- Proofreading: Carefully proofread your submission; language should be clear and correct. Either US English or UK English spelling conventions are acceptable.
- Publication: All papers that are accepted, registered, and presented will be submitted to IEEE Xplore for possible publication (subject to IEEE requirements).
- IEEE Word Template (A4): Download .docx
- IEEE LaTeX Template (ZIP): Download .zip
- IEEE Overleaf Template: Open in Overleaf
Program highlights
TBD.
Organizing committee
Universitas Mercatorum, Rome, Italy
davide.berardi@unimercatorum.it
University of Bologna, Bologna, Italy
lorenzo.rinieri@unibo.it
FAQ
What timezone are the deadlines in?
Deadlines are in AoE (UTC−12) unless explicitly stated otherwise.
Is the review double-blind?
No, the review is single-blind.