As others have mentioned, this is an extremely odd thing to expect to work.
I'll give an example. I worked for a FTSE 100 company using a very old Product Lifecycle Management system (Model Manager, actually based on pre-DOS technology), and we had to upgrade it to a new, fancier one.
That meant migrating all data relating to the company and its group companies' engineering designs: everything to do with 2D drawings, 3D designs, any important connections, all electrical designs, and the Excel sheets related to these containing lists of PCBs and their component parts in Bills of Materials. There is absolutely no way in hell I would trust AI with almost any of that, to get it right, or even to attempt a load without almost immediately erroring.
I have asked AI on multiple occasions to take items from some input and output a table or a JSON structure, and every time it has simply skipped or ignored several items from the input for no reason.
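And the frustrating part is that nothing tells you it happened. A minimal sketch of the kind of completeness check you end up bolting on just to notice the drops (the key field name here is made up):

```python
# Minimal completeness check: compare source keys against output keys so that
# silently dropped items are at least reported. "part_number" is a made-up key.

def find_dropped(source_rows: list[dict], output_rows: list[dict],
                 key: str = "part_number") -> set:
    """Return keys present in the source but missing from the output."""
    source_keys = {row[key] for row in source_rows}
    output_keys = {row[key] for row in output_rows}
    return source_keys - output_keys

# Example: the middle item has been silently dropped and gets flagged.
src = [{"part_number": "A1"}, {"part_number": "B2"}, {"part_number": "C3"}]
out = [{"part_number": "A1"}, {"part_number": "C3"}]
assert find_dropped(src, out) == {"B2"}
```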
This sounds like a terrible idea, and nearly impossible to debug when it inevitably drops data.
You naively replaced a deterministic process with a probabilistic one, following an uneducated trend.
I am taking screenshots of blogposts like this for a museum exhibit opening next year - lmk if you’re willing.
We're not replacing deterministic processes with probabilistic ones; that would be insane for production data.
Here's what actually happens:
1. MCP exposes system schemas in a standardized way
2. AI analyzes the schemas and suggests mappings
3. Engineers review and validate every mapping
4. AI generates deterministic integration code (think: writing the SQL, not running it)
5. We test with real data before any production deployment
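To make step 4 concrete, here's a minimal sketch of the kind of thing that actually runs. The model's only contribution is the proposed column mapping, an engineer signs it off, and what executes against the data is plain, fixed SQL rendered from that mapping. All table and column names below are hypothetical.

```python
# Sketch of step 4: a reviewed mapping is rendered into a single, fixed
# INSERT ... SELECT statement. The model never touches row data at runtime.

REVIEWED_MAPPING = {
    # target column : source expression (proposed by the model, validated by an engineer)
    "part_number": "legacy_parts.part_no",
    "description": "legacy_parts.descr",
    "revision":    "COALESCE(legacy_parts.rev, 'A')",
}

def render_insert_select(target_table: str, source_table: str,
                         mapping: dict[str, str]) -> str:
    """Render a reviewed mapping into one deterministic SQL statement."""
    target_cols = ", ".join(mapping.keys())
    source_exprs = ",\n       ".join(mapping.values())
    return (
        f"INSERT INTO {target_table} ({target_cols})\n"
        f"SELECT {source_exprs}\n"
        f"FROM {source_table};"
    )

if __name__ == "__main__":
    print(render_insert_select("plm.parts", "legacy_parts", REVIEWED_MAPPING))
```

Once the mapping is signed off, the run is exactly as deterministic and reviewable as a hand-written pipeline.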
That's a bold move. Hopefully there are no stray cats.
We used to spend 40–80 hours writing and maintaining brittle ETL code for every integration. Now we spend 4–8 hours deploying MCP (Model Context Protocol) interfaces and letting AI handle the rest. No hardcoded pipelines.
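For anyone wondering what "deploying an MCP interface" looks like in practice, here's a rough sketch assuming the official Python SDK's FastMCP helper; the server name, database path, and table layout are all invented for illustration. The point is that the server exposes schema metadata for the model to read, not a write path into production data.

```python
# Rough sketch: an MCP server that exposes the legacy system's table schemas,
# assuming the Python SDK's FastMCP helper (`pip install mcp`). Names are hypothetical.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("legacy-plm-schema")

@mcp.tool()
def describe_table(table: str) -> list[dict]:
    """Return column names, types, and nullability for one legacy PLM table."""
    conn = sqlite3.connect("legacy_plm.db")  # hypothetical read-only copy
    try:
        rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    finally:
        conn.close()
    return [{"name": r[1], "type": r[2], "notnull": bool(r[3])} for r in rows]

if __name__ == "__main__":
    mcp.run()
```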
Can you give some more info on the results?
Meaning correctness, completeness, etc.
Would you use it for, e.g., tax information? Because if it's wrong, you could get fined.