# HDFS Registry

## Description
HDFS registry provides support for storing the protobuf representation of your feature store objects (data sources, feature views, feature services, etc.) in Hadoop Distributed File System (HDFS).
While it can be used in production, there are still inherent limitations with file-based registries, since changing a single field in the registry requires re-writing the whole registry file. With multiple concurrent writers, this presents a risk of data loss, or bottlenecks writes to the registry since all changes have to be serialized (e.g. when running materialization for multiple feature views or time ranges concurrently).
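To see why concurrent writers risk data loss, consider the classic lost-update pattern. The toy script below (plain JSON, not Feast's actual protobuf registry code) simulates two writers that each read the registry, modify their own feature view, and write the whole file back:

```python
import json
import os
import tempfile

# A throwaway "registry" file standing in for registry.pb.
path = os.path.join(tempfile.mkdtemp(), "registry.json")
with open(path, "w") as f:
    json.dump({"feature_views": {}}, f)

# Both writers read the registry BEFORE either one writes:
with open(path) as f:
    snapshot_a = json.load(f)
with open(path) as f:
    snapshot_b = json.load(f)

snapshot_a["feature_views"]["driver_stats"] = {}
snapshot_b["feature_views"]["user_stats"] = {}

# Each writer rewrites the WHOLE file; the last write wins.
with open(path, "w") as f:
    json.dump(snapshot_a, f)
with open(path, "w") as f:
    json.dump(snapshot_b, f)  # silently discards writer A's change

with open(path) as f:
    final = json.load(f)
print(sorted(final["feature_views"]))  # → ['user_stats']
```

Writer A's `driver_stats` update is gone, which is exactly the hazard when multiple materialization jobs update a file-based registry concurrently.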
## Pre-requisites
The HDFS registry requires Hadoop 3.3+ to be installed and the HADOOP_HOME environment variable set.
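A typical environment setup might look like the following. The install paths are placeholders for your own installation; note that pyarrow's HDFS client loads `libhdfs` over JNI, which requires the Hadoop jars on the Java classpath:

```shell
# Hypothetical install locations — adjust to your environment.
export HADOOP_HOME=/opt/hadoop
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

# Put the Hadoop jars on the classpath for libhdfs (run in the same
# shell that will launch Feast):
export CLASSPATH="$("$HADOOP_HOME"/bin/hadoop classpath --glob)"
```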
## Authentication and User Configuration
The HDFS registry uses `pyarrow.fs.HadoopFileSystem` and does not support specifying HDFS users or Kerberos credentials directly in the `feature_store.yaml` configuration. It relies entirely on the Hadoop and system environment configuration available to the process running Feast.
By default, `pyarrow.fs.HadoopFileSystem` inherits authentication from the underlying Hadoop client libraries and environment variables, such as:
* `HADOOP_USER_NAME`
* `KRB5CCNAME`
* `hadoop.security.authentication`
* Any other relevant properties in `core-site.xml` and `hdfs-site.xml`
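Because credentials cannot go in `feature_store.yaml`, they must be present in the environment before Feast constructs the filesystem. A minimal sketch, where the user name `feast` and the ticket-cache path are placeholder values:

```python
import os

# Placeholder values for illustration — set these in the environment that
# launches Feast, before pyarrow.fs.HadoopFileSystem is first constructed.
os.environ["HADOOP_USER_NAME"] = "feast"        # simple-auth HDFS user
os.environ["KRB5CCNAME"] = "/tmp/krb5cc_feast"  # Kerberos ticket cache

# With a reachable cluster, pyarrow then resolves the namenode and the
# security mode from core-site.xml / hdfs-site.xml on the Hadoop classpath:
#   from pyarrow import fs
#   hdfs = fs.HadoopFileSystem(host="default")
```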
For more information, refer to the `pyarrow.fs.HadoopFileSystem` documentation and your Hadoop distribution's security configuration guide.
## Example
An example of how to configure this would be:
{% code title="feature_store.yaml" %}
```yaml
project: feast_hdfs
registry:
  path: hdfs://[YOUR NAMENODE HOST]:[YOUR NAMENODE PORT]/[PATH TO REGISTRY]/registry.pb
  cache_ttl_seconds: 60
online_store: null
offline_store: null
```
{% endcode %}