
One area where I think AI would be super useful is interpreting enterprise data dictionaries and companion guides, for example:

https://www.cms.gov/files/document/cclf-file-data-elements-r...

Currently I have to write validations based on that definition and then write code to transform the data to another standardized claim format. The work is kind of mind-numbing, and it seems like AI could streamline the process. A rough sketch of what this looks like today is below.
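For context, here's roughly the kind of validation code the data dictionary implies, assuming rules like field type, max length, and required/optional have been transcribed by hand. The field names and rules are illustrative, not taken verbatim from the CCLF companion guide:

    # Hypothetical rules distilled from a data dictionary like the CCLF guide:
    # each field has a max length and a required flag.
    DATA_DICTIONARY = {
        "CUR_CLM_UNIQ_ID": {"max_len": 13, "required": True},
        "CLM_TYPE_CD":     {"max_len": 2,  "required": True},
        "CLM_FROM_DT":     {"max_len": 10, "required": False},
    }

    def validate_row(row: dict) -> list[str]:
        """Return a list of validation errors for one claim record."""
        errors = []
        for field, rule in DATA_DICTIONARY.items():
            value = row.get(field, "")
            if rule["required"] and not value:
                errors.append(f"{field}: required field is empty")
            elif value and len(value) > rule["max_len"]:
                errors.append(f"{field}: exceeds max length {rule['max_len']}")
        return errors

Transcribing hundreds of these rules from a PDF by hand is the mind-numbing part an AI could automate.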



If you have the desired standardized claim format, Lume supports this use case. We also have a pdf parser on the roadmap to parse documents exactly like the one you linked, then transform and pipe the data accordingly.


How does Lume support this today without a pdf parser? Do you have the option to use a preexisting claim format or does the format have to be specified another way?


Our V1 supports json and csv formats for manual imports, and we’re quickly expanding to other formats (like pdf).

So, to clarify - Lume supports this today only if you provide the linked claim data in json or csv format, and in the near future will support direct pdf formats. All of our users so far provide custom data through their data warehouse, json, or csv.


Just to be clear, the pdf does not contain the data. The pdf contains the data dictionary that describes the structure of the data, such as each field's type and whether it's required. The actual claim data is sent in a csv.

The objective is to parse the csv based on the data dictionary described in the pdf.
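In other words, something like the sketch below, assuming the PDF's data dictionary has already been extracted (by hand or by a pdf parser) into a machine-readable form. The file names and JSON schema here are hypothetical:

    import csv
    import json

    # Assumed: the dictionary from the pdf, extracted into JSON, e.g.
    # [{"name": "CLM_TYPE_CD", "required": true}, ...]
    with open("cclf_dictionary.json") as f:
        spec = json.load(f)

    # Validate the actual claim data against that dictionary.
    with open("claims.csv", newline="") as f:
        for line_no, row in enumerate(csv.DictReader(f), start=2):
            for field in spec:
                if field.get("required") and not row.get(field["name"], ""):
                    print(f"line {line_no}: {field['name']} missing")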


Gotcha! In that case, we do not yet support an end-to-end experience for this, but would be willing to prioritize building it for clients if we get strong demand.



