The only purpose of this page is to answer the most common questions and to address the more complex topics around Ingrid.
Ingrid is a Workflow Management System for high-volume data processing across systems and applications, with built-in semantics like logging, auditing, incident management, and more.
Today every successful company runs a large number of systems and applications for its business, and just as many solutions are used for migrating, replicating, cleansing, or provisioning data between them: self-engineered implementations, cloud-only services, highly complex acquired products. When those solutions work as expected, you probably don’t even need a workflow system. It’s only when things go wrong that a system like Ingrid starts to be valuable.
Your task either achieves its goal, or fails successfully.
On this premise, workflow systems are actually risk management tools, like insurance: there when you need them, invisible when you don’t.
Use Ingrid to author workflows as template definitions (Go template language). Define entrypoint or trigger services that forward data to one of your workflows. Additional services let you fetch from databases, write to files, submit to API endpoints, and much more, all backed by clean auditing, logging, and incident management.
Ingrid can be used for simple data pipelining or, with the right combination of Ingrid services, for more complex solutions like data provisioning.
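To give a first impression, here is a tiny, entirely hypothetical workflow step written in the Go template language. The field names and the JSON shape are our assumptions for illustration, not Ingrid's actual format; the step simply shapes an incoming record into the payload for an API endpoint.

```
{{/* hypothetical step: shape an incoming record for an HTTP endpoint */}}
{
  "id":    "{{.ID}}",
  "name":  "{{printf "%s %s" .FirstName .LastName}}",
  "email": "{{.Email}}"
}
```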
Ingrid's general architecture approach, and the reason we use NATS.io rather than Apache Kafka as our message bus, follows the principle of "smart endpoints and dumb pipes". In Martin Fowler's words:
…is messaging over a lightweight message bus. The infrastructure chosen is typically dumb (dumb as in acts as a message router only) - simple implementations such as RabbitMQ or ZeroMQ don’t do much more than provide a reliable asynchronous fabric - the smarts still live in the end points that are producing and consuming messages; in the services.
Source: https://martinfowler.com/articles/microservices.html#SmartEndpointsAndDumbPipes
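To sketch what "dumb pipe, smart endpoint" means in practice, here is a minimal Go example using the NATS client library (github.com/nats-io/nats.go); the subject name and the message content are made up for illustration.

```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// The "dumb pipe": NATS only routes messages, it carries no logic.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	// The "smart endpoint": all processing logic lives in the service
	// that subscribes, not in the bus.
	if _, err := nc.Subscribe("ingrid.demo", func(m *nats.Msg) {
		fmt.Printf("service received: %s\n", string(m.Data))
	}); err != nil {
		panic(err)
	}

	// A producer publishes plain bytes; the bus adds no semantics.
	if err := nc.Publish("ingrid.demo", []byte(`{"id": 1}`)); err != nil {
		panic(err)
	}

	time.Sleep(100 * time.Millisecond) // let the async handler run
}
```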
All of our Ingrid services accept a unified model, i.e. they speak our unified Ingrid message protocol.
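The exact shape of that protocol is beyond the scope of this page, but as a rough sketch (all field names below are illustrative assumptions, not the actual protocol definition), think of an envelope that carries audit-relevant metadata alongside the payload:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Message is a hypothetical sketch of a unified envelope: every service
// reads and writes the same wrapper, so auditing and logging can rely on
// the metadata no matter which endpoint produced the payload.
type Message struct {
	ID        string          `json:"id"`        // unique message ID for auditing
	Workflow  string          `json:"workflow"`  // workflow this message belongs to
	Source    string          `json:"source"`    // producing service
	Timestamp time.Time       `json:"timestamp"` // when the message was created
	Payload   json.RawMessage `json:"payload"`   // the actual business data
}

func main() {
	m := Message{
		ID:        "0001",
		Workflow:  "user-provisioning",
		Source:    "db-reader",
		Timestamp: time.Now().UTC(),
		Payload:   json.RawMessage(`{"email": "jane@example.com"}`),
	}
	out, _ := json.Marshal(m)
	fmt.Println(string(out))
}
```

The point is not the exact fields but the uniformity: every service can log and audit against the same envelope.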
For defining the business logic we evaluated several scripting approaches, among them the Go template engine and Deno.
Buckle up and get ready. Many data processing platforms today use Python as their interpreted programming language for defining business logic. We decided to give Go templates a chance, since they fit our use case perfectly. Yes, in order to use Ingrid, you should have a look at the Go template language and get used to it.
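If Go templates are new to you, the following self-contained Go program shows the flavor of template-defined logic: a conditional and a loop over some made-up user records (the record type and the data are purely illustrative).

```go
package main

import (
	"os"
	"text/template"
)

// A made-up record type, just to demonstrate template logic.
type User struct {
	Name   string
	Active bool
}

const tmpl = `{{range .}}{{if .Active}}provision {{.Name}}
{{else}}skip {{.Name}} (inactive)
{{end}}{{end}}`

func main() {
	users := []User{{Name: "jane", Active: true}, {Name: "bob", Active: false}}
	t := template.Must(template.New("logic").Parse(tmpl))
	t.Execute(os.Stdout, users)
}
```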
Out of the box, Ingrid services cover common endpoint types such as databases (DB), HTTP APIs, and LDAP directories.
NATS.io, the message bus Ingrid builds on, is a Cloud Native Computing Foundation (CNCF) project.
Ingrid services follow the twelve-factor app methodology (https://12factor.net/de/): they stay stateless, scale horizontally, and lend themselves to high-availability (HA) setups.
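As one concrete example, factor III of the twelve-factor methodology says configuration lives in the environment. A minimal Go sketch (the variable name NATS_URL is our assumption, not a fixed Ingrid setting):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Twelve-factor, factor III: configuration comes from the environment,
	// so the same build can run in dev, staging, and production.
	natsURL := os.Getenv("NATS_URL")
	if natsURL == "" {
		natsURL = "nats://localhost:4222" // sensible local default
	}
	fmt.Println("connecting to", natsURL)
}
```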
Ingrid deliberately ships only a minimalistic monitoring GUI. Workflows are authored as code (template definitions), so there is no need for a visual workflow designer; what you need at runtime is visibility. The monitor therefore focuses on the operational essentials: auditing, logging, and incident management.
You have come this far, now let's get more concrete.
Data integration involves combining data from different sources while providing a unified view of the combined data, enabling you to query and manipulate all of your data from a single interface and derive analytics and statistics. While the sources and types of data continue to grow, it becomes increasingly important to be able to perform quality analysis on that data.
Source: https://www.alooma.com/blog/what-is-data-integration
The goal of data cleansing is to improve data quality and utility by catching and correcting errors before it is transferred to a target database or data warehouse. Manual data cleansing may or may not be realistic, depending on the amount of data and number of data sources your company has.
Source: https://www.alooma.com/blog/what-is-data-cleansing
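As a trivial, Ingrid-independent illustration of what a cleansing rule can look like, the Go sketch below normalizes and validates e-mail addresses before they would be passed on to a target system (the rules and the data are made up).

```go
package main

import (
	"fmt"
	"strings"
)

// cleanEmail applies two simple cleansing rules: trim stray whitespace
// and lowercase the address; it rejects values without an "@".
func cleanEmail(raw string) (string, error) {
	e := strings.ToLower(strings.TrimSpace(raw))
	if !strings.Contains(e, "@") {
		return "", fmt.Errorf("invalid e-mail: %q", raw)
	}
	return e, nil
}

func main() {
	for _, raw := range []string{"  Jane@Example.COM ", "not-an-email"} {
		if e, err := cleanEmail(raw); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("cleaned:", e)
		}
	}
}
```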
Data migration is simply the process of moving data from a source system to a target system. Companies have many different reasons for migrating data. You may want to migrate data when you acquire another company and you need to integrate that company’s data. Or, you may want to integrate data from different departments within your company so the data is available across your entire business. You may want to move your data from an on-premise platform to a cloud platform. Or, perhaps you’re moving from an outdated data storage system to a new database or data storage system. The concept of data migration is simple, but it can sometimes be a complex process.
Source: https://www.alooma.com/blog/what-is-data-migration
In simple terms, data replication takes data from your source databases — Oracle, MySQL, Microsoft SQL Server, PostgreSQL, MongoDB, etc. — and copies it into your data warehouse. This can be a one-time operation or an ongoing process as your data is updated. Because your data warehouse is the important mechanism through which you’re able to access and analyze your data, proper data replication is necessary to avoid losing, duplicating, or otherwise mucking up valuable information.
Source: https://www.alooma.com/solutions/database-replication
Fantastic! We firmly believe that if you have competent developers, you can overcome any obstacle.
No. RPA (Robotic Process Automation) is software that automates manual, repetitive human activities. Ingrid focuses on automating processes between endpoints like databases.
Some well-known RPAs:
* UiPath
* Blue Prism
* Microsoft Power Automate
* Zapier
The classical use case for an IAM (Identity and Access Management) system is to grant users access to systems and applications, along with support for granular permissions, multi-factor authentication (MFA), identity federation, and more. Depending on the IAM solution, some offer user/role provisioning capabilities. Those provisioning capabilities are unfortunately often very limited: non-extensible user and role schemas, or mere data integration instead of a real synchronization mechanism, are just some of the known limitations.
In most cases those provisioning capabilities are enough, but anyone hitting their limits needs to reach for another solution. With Ingrid you can assist the provisioning process or even replace it completely.
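As a sketch of what "assisting the provisioning process" could mean (everything here is hypothetical: the source record, the attribute names, the LDAP layout), a workflow step could map an HR record to an LDAP entry using a Go template:

```
{{/* hypothetical mapping step: HR record -> LDAP entry */}}
dn: uid={{.Username}},ou=people,dc=example,dc=com
objectClass: inetOrgPerson
uid: {{.Username}}
cn: {{.FirstName}} {{.LastName}}
sn: {{.LastName}}
mail: {{.Email}}
```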
Not every use case needs a GUI. Ingrid workflows run unattended in the background; in the spirit of the insurance analogy above, you only interact with the system when something goes wrong, and that is exactly what the monitoring GUI, the logs, and the incident management are there for.
Well yes, but actually no. Apache Kafka is a message broker, or more precisely a framework implementation of a software bus using stream processing. Technically speaking, event streaming is the practice of capturing data in real time from event sources like databases, sensors, mobile devices, cloud services, and software applications in the form of streams of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting to the event streams in real time as well as retrospectively; and routing the event streams to different destination technologies as needed.
And exactly this manipulating, processing, and reacting to the event streams is done by individual software solutions. In other words, you still need services somewhere that implement the business logic for processing the event streams: Apache Kafka is only one part of an overall solution. Ingrid itself uses NATS.io as its message bus, and there are use cases where Ingrid could process those event streams coming from Apache Kafka.
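As a sketch of such a use case (assuming the segmentio/kafka-go client and made-up broker, topic, and group names), a small bridge service could consume Kafka events and hand them over to an Ingrid workflow:

```go
package main

import (
	"context"
	"fmt"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Read events from a Kafka topic; broker, topic, and group are made up.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "events",
		GroupID: "ingrid-bridge",
	})
	defer r.Close()

	for {
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			break
		}
		// This is where the business logic lives: for example, wrap the
		// event in the unified Ingrid message and forward it to a workflow.
		fmt.Printf("event from Kafka: %s\n", string(msg.Value))
	}
}
```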