Update CloudronManifest.json and documentation files
parent 3743e382b7
commit cc6a8d193a
CloudronManifest.json
@@ -25,7 +25,7 @@
   ],
   "minBoxVersion": "7.0.0",
   "memoryLimit": 4294967296,
-  "postInstallMessage": "Elasticsearch is now installed and available for internal use only. You can access it using http://localhost:9200 from within other Cloudron apps.\n\nUsername: elastic\nPassword: A secure random password has been generated and stored in /app/data/credentials.txt. You can check it in the app logs or by accessing the app container.",
+  "postInstallMessage": "Elasticsearch is now installed and available for internal use only.\n\nConnection Information:\n- REST API: `http://localhost:9200` (from within other Cloudron apps)\n- Transport Protocol: port `9300` (for Elasticsearch clients)\n\nAuthentication:\n- Username: `elastic`\n- Password: A secure random password has been generated and stored in `/app/data/credentials.txt`.\n\nThe password can be retrieved from:\n1. The app logs\n2. By accessing the app container\n3. The file at /app/data/credentials.txt\n\nNote: SSL is disabled for HTTP connections for compatibility with most client applications.",
   "multiDomain": false,
   "tcpPorts": {
     "9300": {
INSTALL.md
@@ -30,8 +30,8 @@ This guide explains how to install the Elasticsearch package on your Cloudron in
 After installation:
 
 1. Check the app logs to ensure Elasticsearch has started correctly
-2. Update the default password in the app's environment settings
-3. Configure your other Cloudron apps to connect to Elasticsearch using `localhost:9200`
+2. Note the generated password from the logs or from `/app/data/credentials.txt`
+3. Configure your other Cloudron apps to connect to Elasticsearch using the format: `http://elastic:<password>@localhost:9200`
 
 ## Troubleshooting
 
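A quick way to confirm that connection string works, from a terminal inside another Cloudron app, is a sketch like the following (substitute the generated password; `?pretty` only formats the JSON):

```bash
# Verify the connection string and check cluster health
curl -s "http://elastic:<password>@localhost:9200/_cluster/health?pretty"
```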
@@ -41,4 +41,137 @@ After installation:

## Support

For support, please create an issue on the package's GitHub repository or contact the package maintainer.

## Integration with Nextcloud

If you want to use this Elasticsearch package with Nextcloud's Full-Text Search functionality, follow these steps:

### Prerequisites

1. Ensure you have the following Nextcloud apps installed:
   - Full-Text Search
   - Full-Text Search - Elasticsearch Platform
   - Any content provider app (e.g., Full-Text Search - Files)

### Configuration Steps

#### 1. Find Your Elasticsearch Password

Check the Elasticsearch app logs to find the generated password:

```bash
# From the Cloudron dashboard, view the Elasticsearch app logs
# Look for lines containing "Password: " after Elasticsearch has started
# Or check the file at /app/data/credentials.txt within the container
```
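If the dashboard is not at hand, the same information can be read with the Cloudron CLI. This is only a sketch: it assumes the `cloudron` CLI is installed and logged in, and `your-elasticsearch-app` is a placeholder for the app's location.

```bash
# Open a shell inside the Elasticsearch app container
cloudron exec --app your-elasticsearch-app

# Inside the container, print the generated credentials
cat /app/data/credentials.txt
```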
#### 2. Create Elasticsearch Index (Optional)

You can manually create an Elasticsearch index using curl:

```bash
# Connect using localhost (from within another Cloudron app)
curl -X PUT "http://elastic:<your-password>@localhost:9200/nextcloud"
```

This creates an index named "nextcloud" that will be used for Nextcloud integration.

For general purposes, you can create other indices as needed:

```bash
# Create a general-purpose index named "cloud"
curl -X PUT "http://elastic:<your-password>@localhost:9200/cloud"
```

You can verify the indices were created successfully with:

```bash
curl "http://elastic:<your-password>@localhost:9200/_cat/indices"
```

Alternatively, you can let Nextcloud create and configure the index automatically using the OCC command:

```bash
cd /app/code
php occ fulltextsearch_elasticsearch:configure '{"elastic_host":"http://elastic:<your-password>@localhost:9200","elastic_index":"nextcloud"}'
```

#### 3. Configure Nextcloud

Access the Nextcloud CLI:

```bash
# From the Cloudron dashboard, click on your Nextcloud app
# Navigate to "Settings" → "Terminal" to access the CLI
# Or use the Cloudron CLI command: cloudron exec --app your-nextcloud-app
```

Run the following commands in the Nextcloud CLI:

```bash
# Using the OCC command (preferred method)
cd /app/code
php occ config:app:set fulltextsearch_elasticsearch allow_self_signed_cert --value=false
php occ config:app:set fulltextsearch_elasticsearch elastic_ssl --value=false
```

#### 4. Configure the Elasticsearch Connection

Still in the Nextcloud CLI, configure the Elasticsearch connection:

```bash
php occ fulltextsearch_elasticsearch:configure '{"elastic_host":"http://elastic:<your-password>@localhost:9200","elastic_index":"nextcloud"}'
```

Replace `<your-password>` with the password you found in step 1.
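To double-check what was stored, the values can be read back with `occ`. A sketch, assuming the Full-Text Search app keeps its settings under the `fulltextsearch_elasticsearch` app-config namespace used by the commands above:

```bash
# Print the host and index that the full-text search app will use
php occ config:app:get fulltextsearch_elasticsearch elastic_host
php occ config:app:get fulltextsearch_elasticsearch elastic_index
```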
#### 5. Initialize the Index

Create and initialize the Elasticsearch index:

```bash
php occ fulltextsearch:index
```

For a full reindex of all content, use:

```bash
php occ fulltextsearch:index -f
```

#### 6. Test the Configuration

Verify that the connection works:

```bash
php occ fulltextsearch:test
```
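Once indexing has run, you can also confirm from the Elasticsearch side that documents are arriving. A sketch, run from a terminal inside a Cloudron app, with the real password substituted:

```bash
# Count the documents currently held in the "nextcloud" index
curl -u elastic:<your-password> "http://localhost:9200/nextcloud/_count?pretty"
```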
### Authentication Details

Use the following credentials for Elasticsearch:
- Username: `elastic`
- Password: The generated password from `/app/data/credentials.txt`

### Troubleshooting

If you encounter connection errors:
- Ensure you're using port 9200 (not 9300)
- Use HTTP instead of HTTPS in the connection URL
- In Cloudron, use `localhost` instead of IP addresses for app-to-app communication
- Check that the Elasticsearch app is running

#### Testing Connectivity

You can test connectivity to Elasticsearch from another Cloudron app with:

```bash
curl -v http://elastic:<your-password>@localhost:9200
```

If successful, you should see a JSON response with Elasticsearch information.

#### Advanced Configuration

In some cases, you might need to modify additional Elasticsearch settings. You can do this via the elasticsearch.yml file, which is stored in `/app/data/config/elasticsearch.yml` within the Elasticsearch app container.
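If you do change a setting, the general flow is: edit the file inside the app's data directory, then restart the app so Elasticsearch reloads it. A sketch using the Cloudron CLI; the app location is a placeholder, and restarting from the Cloudron dashboard works just as well:

```bash
# Open a shell in the Elasticsearch app container
cloudron exec --app your-elasticsearch-app

# Edit the configuration, then save and exit
vi /app/data/config/elasticsearch.yml

# Restart the app from the Cloudron dashboard so the new settings take effect
```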
README.md
@@ -4,10 +4,11 @@ This package provides Elasticsearch for Cloudron, configured for internal use on
 
 ## Features
 
-- Elasticsearch 8.17.3 (configurable)
+- Elasticsearch 8.17.3
 - Single-node configuration optimized for Cloudron
 - Security enabled with basic authentication
 - Internal access only
+- Automatic optimization based on container resources
 
 ## Usage
 
@@ -18,39 +19,53 @@ After installation, Elasticsearch will be available at the following URLs:
 ### Authentication
 
-Default credentials:
+Authentication credentials:
 - Username: `elastic`
-- Password: `cloudron`
+- Password: A secure random password is generated during installation
 
-It's recommended to change the default password after installation by updating the `.env` file and restarting the app.
+You can find the password in:
+1. The app logs after installation
+2. By accessing the app container
+3. The file at `/app/data/credentials.txt`
 
 ### Connection from other Cloudron apps
 
-To connect to Elasticsearch from another Cloudron app, you can use the following connection details:
+To connect to Elasticsearch from another Cloudron app, use the following connection details:
 
 ```
 Host: localhost
 Port: 9200
 Protocol: http
 Username: elastic
-Password: <your password from .env>
+Password: <password from credentials.txt>
 ```
 
-## Configuration
+Example connection using cURL:
 
-You can modify the configuration by editing the `.env` file in the app's data directory and restarting the app.
+```bash
+curl -u elastic:<password> http://localhost:9200
+```
 
-Available configuration options:
+## Security Notes
 
-- `ELASTIC_PASSWORD`: Password for the 'elastic' user
-- `STACK_VERSION`: Version of Elasticsearch to use
-- `CLUSTER_NAME`: Name of the Elasticsearch cluster
-- `LICENSE`: License type ('basic' or 'trial')
+- HTTP SSL is disabled for compatibility with most client applications
+- Transport protocol is secured with internal certificates
+- Authentication is required for all operations
+- All data is stored in the app's data directory
+
+## Performance Configuration
+
+The package automatically configures Elasticsearch based on the container's available resources:
+
+- Java heap size is set to 50% of available memory
+- GC optimization for container environments
+- Index settings tuned for single-node operation
 
 ## Limitations
 
 - This package is for internal use only and is not exposed to the web
 - It's configured as a single-node cluster for simplicity
-- Memory is limited to 1GB (configurable in CloudronManifest.json)
+- Memory usage scales with container limits set in Cloudron
 
 ## Support
 
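To sanity-check the heap sizing described under Performance Configuration, the node stats can be queried from another app on the same Cloudron. A sketch; `<password>` is the generated password:

```bash
# Show configured and current JVM heap for the single node
curl -u elastic:<password> "http://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.current,heap.percent"
```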
instructions.txt (deleted)
@@ -1,303 +0,0 @@
Configure and start the cluster

Install Docker Compose. Visit the Docker Compose docs to install Docker Compose for your environment.

If you're using Docker Desktop, Docker Compose is installed automatically. Make sure to allocate at least 4GB of memory to Docker Desktop. You can adjust memory usage in Docker Desktop by going to Settings > Resources.

Create or navigate to an empty directory for the project.
Download and save the following files in the project directory:

.env
docker-compose.yml

In the .env file, specify a password for the ELASTIC_PASSWORD and KIBANA_PASSWORD variables.

The passwords must be alphanumeric and can't contain special characters, such as ! or @. The bash script included in the docker-compose.yml file only works with alphanumeric characters. Example:

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=changeme

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=changeme
...

In the .env file, set STACK_VERSION to the current Elastic Stack version.

...
# Version of Elastic products
STACK_VERSION=8.17.3
...

By default, the Docker Compose configuration exposes port 9200 on all network interfaces.

To avoid exposing port 9200 to external hosts, set ES_PORT to 127.0.0.1:9200 in the .env file. This ensures Elasticsearch is only accessible from the host machine.

...
# Port to expose Elasticsearch HTTP API to the host
#ES_PORT=9200
ES_PORT=127.0.0.1:9200
...

To start the cluster, run the following command from the project directory.

docker-compose up -d

After the cluster has started, open http://localhost:5601 in a web browser to access Kibana.
Log in to Kibana as the elastic user using the ELASTIC_PASSWORD you set earlier.

Stop and remove the cluster

To stop the cluster, run docker-compose down. The data in the Docker volumes is preserved and loaded when you restart the cluster with docker-compose up.

docker-compose down

To delete the network, containers, and volumes when you stop the cluster, specify the -v option:

docker-compose down -v

Next steps

You now have a test Elasticsearch environment set up. Before you start serious development or go into production with Elasticsearch, review the requirements and recommendations to apply when running Elasticsearch in Docker in production.

Using the Docker images in production

The following requirements and recommendations apply when running Elasticsearch in Docker in production.

Set vm.max_map_count to at least 262144

The vm.max_map_count kernel setting must be set to at least 262144 for production use.

How you set vm.max_map_count depends on your platform.

Linux

To view the current value for the vm.max_map_count setting, run:

grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144

To apply the setting on a live system, run:

sysctl -w vm.max_map_count=262144

To permanently change the value for the vm.max_map_count setting, update the value in /etc/sysctl.conf.

macOS with Docker for Mac

The vm.max_map_count setting must be set within the xhyve virtual machine:

From the command line, run:

screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

Press enter and use sysctl to configure vm.max_map_count:

sysctl -w vm.max_map_count=262144

To exit the screen session, type Ctrl a d.

Windows and macOS with Docker Desktop

The vm.max_map_count setting must be set via docker-machine:

docker-machine ssh
sudo sysctl -w vm.max_map_count=262144

Windows with Docker Desktop WSL 2 backend

The vm.max_map_count setting must be set in the "docker-desktop" WSL instance before the Elasticsearch container will properly start. There are several ways to do this, depending on your version of Windows and your version of WSL.

If you are on Windows 10 before version 22H2, or if you are on Windows 10 version 22H2 using the built-in version of WSL, you must either manually set it every time you restart Docker before starting your Elasticsearch container, or (if you do not wish to do so on every restart) you must globally set every WSL2 instance to have the vm.max_map_count changed. This is because these versions of WSL do not properly process the /etc/sysctl.conf file.

To manually set it every time you reboot, you must run the following commands in a command prompt or PowerShell window every time you restart Docker:

wsl -d docker-desktop -u root
sysctl -w vm.max_map_count=262144

If you are on these versions of WSL and you do not want to have to run those commands every time you restart Docker, you can globally change every WSL distribution with this setting by modifying your %USERPROFILE%\.wslconfig as follows:

[wsl2]
kernelCommandLine = "sysctl.vm.max_map_count=262144"

This will cause all WSL2 VMs to have that setting assigned when they start.

If you are on Windows 11, or Windows 10 version 22H2 and have installed the Microsoft Store version of WSL, you can modify the /etc/sysctl.conf within the "docker-desktop" WSL distribution, perhaps with commands like this:

wsl -d docker-desktop -u root
vi /etc/sysctl.conf

and appending a line which reads:

vm.max_map_count = 262144

Configuration files must be readable by the elasticsearch user

By default, Elasticsearch runs inside the container as user elasticsearch using uid:gid 1000:0.

One exception is Openshift, which runs containers using an arbitrarily assigned user ID. Openshift presents persistent volumes with the gid set to 0, which works without any adjustments.

If you are bind-mounting a local directory or file, it must be readable by the elasticsearch user. In addition, this user must have write access to the config, data and log dirs (Elasticsearch needs write access to the config directory so that it can generate a keystore). A good strategy is to grant group access to gid 0 for the local directory.

For example, to prepare a local directory for storing data through a bind-mount:

mkdir esdatadir
chmod g+rwx esdatadir
chgrp 0 esdatadir

You can also run an Elasticsearch container using both a custom UID and GID. You must ensure that file permissions will not prevent Elasticsearch from executing. You can use one of two options:

Bind-mount the config, data and logs directories. If you intend to install plugins and prefer not to create a custom Docker image, you must also bind-mount the plugins directory.
Pass the --group-add 0 command line option to docker run. This ensures that the user under which Elasticsearch is running is also a member of the root (GID 0) group inside the container.

Increase ulimits for nofile and nproc

Increased ulimits for nofile and nproc must be available for the Elasticsearch containers. Verify the init system for the Docker daemon sets them to acceptable values.

To check the Docker daemon defaults for ulimits, run:

docker run --rm docker.elastic.co/elasticsearch/elasticsearch:8.17.3 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'

If needed, adjust them in the Daemon or override them per container. For example, when using docker run, set:

--ulimit nofile=65535:65535

Disable swapping

Swapping needs to be disabled for performance and node stability. For information about ways to do this, see Disable swapping.

If you opt for the bootstrap.memory_lock: true approach, you also need to define the memlock: true ulimit in the Docker Daemon, or explicitly set for the container as shown in the sample compose file. When using docker run, you can specify:

-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1

Randomize published ports

The image exposes TCP ports 9200 and 9300. For production clusters, randomizing the published ports with --publish-all is recommended, unless you are pinning one container per host.

Manually set the heap size

By default, Elasticsearch automatically sizes JVM heap based on a node's roles and the total memory available to the node's container. We recommend this default sizing for most production environments. If needed, you can override default sizing by manually setting JVM heap size.

To manually set the heap size in production, bind mount a JVM options file under /usr/share/elasticsearch/config/jvm.options.d that includes your desired heap size settings.

For testing, you can also manually set the heap size using the ES_JAVA_OPTS environment variable. For example, to use 1GB, use the following command.

docker run -e ES_JAVA_OPTS="-Xms1g -Xmx1g" -e ENROLLMENT_TOKEN="<token>" --name es01 -p 9200:9200 --net elastic -it docker.elastic.co/elasticsearch/elasticsearch:8.17.3

The ES_JAVA_OPTS variable overrides all other JVM options. We do not recommend using ES_JAVA_OPTS in production.

Pin deployments to a specific image version

Pin your deployments to a specific version of the Elasticsearch Docker image. For example docker.elastic.co/elasticsearch/elasticsearch:8.17.3.

Always bind data volumes

You should use a volume bound on /usr/share/elasticsearch/data for the following reasons:

The data of your Elasticsearch node won't be lost if the container is killed
Elasticsearch is I/O sensitive and the Docker storage driver is not ideal for fast I/O
It allows the use of advanced Docker volume plugins

Avoid using loop-lvm mode

If you are using the devicemapper storage driver, do not use the default loop-lvm mode. Configure docker-engine to use direct-lvm.

Centralize your logs

Consider centralizing your logs by using a different logging driver. Also note that the default json-file logging driver is not ideally suited for production use.

Configuring Elasticsearch with Docker

When you run in Docker, the Elasticsearch configuration files are loaded from /usr/share/elasticsearch/config/.

To use custom configuration files, you bind-mount the files over the configuration files in the image.

You can set individual Elasticsearch configuration parameters using Docker environment variables. The sample compose file and the single-node example use this method. You can use the setting name directly as the environment variable name. If you cannot do this, for example because your orchestration platform forbids periods in environment variable names, then you can use an alternative style by converting the setting name as follows.

Change the setting name to uppercase
Prefix it with ES_SETTING_
Escape any underscores (_) by duplicating them
Convert all periods (.) to underscores (_)

For example, -e bootstrap.memory_lock=true becomes -e ES_SETTING_BOOTSTRAP_MEMORY__LOCK=true.

You can use the contents of a file to set the value of the ELASTIC_PASSWORD or KEYSTORE_PASSWORD environment variables, by suffixing the environment variable name with _FILE. This is useful for passing secrets such as passwords to Elasticsearch without specifying them directly.

For example, to set the Elasticsearch bootstrap password from a file, you can bind mount the file and set the ELASTIC_PASSWORD_FILE environment variable to the mount location. If you mount the password file to /run/secrets/bootstrapPassword.txt, specify:

-e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt

You can override the default command for the image to pass Elasticsearch configuration parameters as command line options. For example:

docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclustername

While bind-mounting your configuration files is usually the preferred method in production, you can also create a custom Docker image that contains your configuration.

Mounting Elasticsearch configuration files

Create custom config files and bind-mount them over the corresponding files in the Docker image. For example, to bind-mount custom_elasticsearch.yml with docker run, specify:

-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml

If you bind-mount a custom elasticsearch.yml file, ensure it includes the network.host: 0.0.0.0 setting. This setting ensures the node is reachable for HTTP and transport traffic, provided its ports are exposed. The Docker image's built-in elasticsearch.yml file includes this setting by default.

The container runs Elasticsearch as user elasticsearch using uid:gid 1000:0. Bind mounted host directories and files must be accessible by this user, and the data and log directories must be writable by this user.

Create an encrypted Elasticsearch keystore

By default, Elasticsearch will auto-generate a keystore file for secure settings. This file is obfuscated but not encrypted.

To encrypt your secure settings with a password and have them persist outside the container, use a docker run command to manually create the keystore instead. The command must:

Bind-mount the config directory. The command will create an elasticsearch.keystore file in this directory. To avoid errors, do not directly bind-mount the elasticsearch.keystore file.
Use the elasticsearch-keystore tool with the create -p option. You'll be prompted to enter a password for the keystore.

For example:

docker run -it --rm \
  -v full_path_to/config:/usr/share/elasticsearch/config \
  docker.elastic.co/elasticsearch/elasticsearch:8.17.3 \
  bin/elasticsearch-keystore create -p

You can also use a docker run command to add or update secure settings in the keystore. You'll be prompted to enter the setting values. If the keystore is encrypted, you'll also be prompted to enter the keystore password.

docker run -it --rm \
  -v full_path_to/config:/usr/share/elasticsearch/config \
  docker.elastic.co/elasticsearch/elasticsearch:8.17.3 \
  bin/elasticsearch-keystore \
  add my.secure.setting \
  my.other.secure.setting

If you've already created the keystore and don't need to update it, you can bind-mount the elasticsearch.keystore file directly. You can use the KEYSTORE_PASSWORD environment variable to provide the keystore password to the container at startup. For example, a docker run command might have the following options:

-v full_path_to/config/elasticsearch.keystore:/usr/share/elasticsearch/config/elasticsearch.keystore
-e KEYSTORE_PASSWORD=mypassword

Using custom Docker images

In some environments, it might make more sense to prepare a custom image that contains your configuration. A Dockerfile to achieve this might be as simple as:

FROM docker.elastic.co/elasticsearch/elasticsearch:8.17.3
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/

You could then build and run the image with:

docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom

Some plugins require additional security permissions. You must explicitly accept them either by:

Attaching a tty when you run the Docker image and allowing the permissions when prompted.
Inspecting the security permissions and accepting them (if appropriate) by adding the --batch flag to the plugin install command.

See Plugin management for more information.

Troubleshoot Docker errors for Elasticsearch

Here's how to resolve common errors when running Elasticsearch with Docker.

elasticsearch.keystore is a directory

Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.io.IOException: Is a directory: SimpleFSIndexInput(path="/usr/share/elasticsearch/config/elasticsearch.keystore") Likely root cause: java.io.IOException: Is a directory

A keystore-related docker run command attempted to directly bind-mount an elasticsearch.keystore file that doesn't exist. If you use the -v or --volume flag to mount a file that doesn't exist, Docker instead creates a directory with the same name.

To resolve this error:

Delete the elasticsearch.keystore directory in the config directory.
Update the -v or --volume flag to point to the config directory path rather than the keystore file's path. For an example, see Create an encrypted Elasticsearch keystore.
Retry the command.

elasticsearch.keystore: Device or resource busy

Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy

A docker run command attempted to update the keystore while directly bind-mounting the elasticsearch.keystore file. To update the keystore, the container requires access to other files in the config directory, such as keystore.tmp.

To resolve this error:

Update the -v or --volume flag to point to the config directory path rather than the keystore file's path. For an example, see Create an encrypted Elasticsearch keystore.
Retry the command.
@@ -6,7 +6,7 @@ mkdir -p build
 rm -rf build/*
 
 # Copy all files to the build directory
-cp -r CloudronManifest.json Dockerfile .env start.sh stop.sh README.md logo.svg build/
+cp -r CloudronManifest.json Dockerfile .env start.sh stop.sh README.md logo.png build/
 
 # Create the package
 cd build