Saturday, January 21, 2023

Setting a frost alarm in Home Assistant

One of the issues with winter is ice covering your windscreen, which needs to be cleared before you can drive and adds time to the morning routine. Alternatively, you can put a simple cover over the windscreen, which prevents condensation and therefore the formation of ice. However, this is not needed every day, only on nights when the temperature drops to freezing. I wrote an automation in Home Assistant which checks the forecast and warns me if there is a chance of frost overnight.

I use the Met Office integration in Home Assistant for my forecasts. One of the entities available provides a forecast in 3-hourly chunks for the next 4 days. To determine whether there is a chance of frost overnight, at 21:00 I check the next three 3-hourly forecast temperatures and see whether the temperature goes below 3 °C. If so, I send an announcement on my Google Home devices to warn occupants of a chance of frost. This is a good reminder to cover the windscreen to avoid ice the next morning.

To implement this in Home Assistant, I first create a template sensor with the following.

# Minimum temperature over the next three 3-hourly forecasts
  - name: "Minimum Forecast Temperature"
    unique_id: "minimum_forecast_temperature"
    unit_of_measurement: '°C'
    state: >-
      {% set mylist = namespace(temps=[]) %}
      {% for s in state_attr('weather.met_office_home_3_hourly', 'forecast')[0:3] -%}
      {% set mylist.temps = mylist.temps + [s.temperature] %}
      {%- endfor %}
      {{ mylist.temps|min }}
Here, weather.met_office_home_3_hourly is the sensor provided by the Met Office integration. We go through the forecasts for the next three 3-hour chunks and report the minimum temperature.

The template sensor sensor.minimum_forecast_temperature now returns the minimum forecast temperature for the next 9 hours.
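Outside Jinja, the same calculation is easier to see. A minimal Python sketch with made-up forecast data (each entry mirrors the dict shape of the forecast attribute above):

```python
# Illustrative equivalent of the template sensor: take the first three
# 3-hourly forecast entries and report the lowest temperature.
# The data below is invented for the example, not live output.
forecast = [
    {"datetime": "2023-01-21T21:00:00", "temperature": 2.1},
    {"datetime": "2023-01-22T00:00:00", "temperature": 0.4},
    {"datetime": "2023-01-22T03:00:00", "temperature": 1.8},
    {"datetime": "2023-01-22T06:00:00", "temperature": 4.0},  # outside the 9-hour window
]

minimum = min(entry["temperature"] for entry in forecast[0:3])
print(minimum)  # → 0.4
```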

You can now set an automation to check if the temperature dips below a pre-set temperature and make the announcement.

alias: "Door: Check for frost"
description: ""
trigger:
  - platform: time
    at: "21:00:00"
condition:
  - condition: numeric_state
    entity_id: sensor.minimum_forecast_temperature
    below: 3
action:
  - service: tts.google_say
    data:
      entity_id: >-
        media_player.dining_speaker
      message: >-
        Minimum temperature falling below freezing. Expect frost tomorrow
        morning.
mode: single

Thursday, October 14, 2021

Using Octopus Energy data in Home Assistant

We have a SMETS2 smart energy meter installed at our property. Energy consumption data is uploaded to a central location (the DCC) over a mobile network. This data is then accessed by my energy provider the next day and my energy consumption is calculated.

My current energy provider is Octopus Energy, who kindly provide an API to fetch the consumption data they obtain through the smart meter and the DCC. I have been looking forward to pulling this consumption data into Home Assistant.

There are several ways to do this. The instructions below describe how I access the data from my account at Octopus Energy.

Step 1:

Obtain the curl commands required to access the JSON output of the consumption data from your online account. Note that the commands below are copied over from my account with my details blanked out, and hence will not work cut-and-pasted as is.

For your Electricity data, the curl command is of the form

curl -u "sk_live_KEYKEYKEYKEYKEYKEY:" "https://api.octopus.energy/v1/electricity-meter-points/11111111111111/meters/2222222222/consumption/"

For Gas usage, it is of the form

curl -u "sk_live_KEYKEYKEYKEYKEYKEY:" "https://api.octopus.energy/v1/gas-meter-points/33333333333/meters/E44444444444444/consumption/"

You will next need to modify the URLs in the curl commands above to obtain data consolidated by day, by appending the string "?group_by=day" to the URLs.

The modified commands are

curl -u "sk_live_KEYKEYKEYKEYKEYKEY:" "https://api.octopus.energy/v1/electricity-meter-points/11111111111111/meters/2222222222/consumption/?group_by=day"

curl -u "sk_live_KEYKEYKEYKEYKEYKEY:" "https://api.octopus.energy/v1/gas-meter-points/33333333333/meters/E44444444444444/consumption/?group_by=day"

for electricity and gas respectively. Make a note of these commands. You can test them immediately on any desktop with curl available.
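The `-u "KEY:"` flag is plain HTTP Basic authentication, with the API key as the username and an empty password. A small Python sketch (with a placeholder key) of the Authorization header curl builds from it:

```python
import base64

api_key = "sk_live_KEYKEYKEYKEYKEYKEY"  # placeholder, not a real key

# curl -u "KEY:" base64-encodes "username:password", here "KEY:" with an
# empty password, and sends it as an HTTP Basic Authorization header.
token = base64.b64encode(f"{api_key}:".encode()).decode()
authorization = f"Basic {token}"
print(authorization.startswith("Basic "))  # → True
```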

Step 2:

You now need to modify the sensors configuration of your Home Assistant installation. My Home Assistant configuration.yaml file has the following line

sensor: !include sensor.yaml

which lets me split out the sensors configuration into a separate sensor.yaml file. If you add the configuration to configuration.yaml directly instead, you will need to adjust the indentation of the blocks below slightly.

I first create two command line sensors to fetch the data from Octopus Energy using the curl commands we obtained in Step 1.

- platform: command_line
  name: oe_electricity
  scan_interval: 86400
  value_template: '{{ value_json.count }}'
  json_attributes:
    - results
  command: >-
    curl -u "sk_live_KEYKEYKEYKEYKEYKEY:" "https://api.octopus.energy/v1/electricity-meter-points/11111111111111/meters/2222222222/consumption/?group_by=day"

- platform: command_line
  name: oe_gas
  scan_interval: 86400
  value_template: '{{ value_json.count }}'
  json_attributes:
    - results
  command: >-
curl -u "sk_live_KEYKEYKEYKEYKEYKEY:" "https://api.octopus.energy/v1/gas-meter-points/33333333333/meters/E44444444444444/consumption/?group_by=day"
These create two sensors, sensor.oe_electricity and sensor.oe_gas, which hold the JSON results returned by the API as attributes.

These have scan intervals of one day so that we do not end up hammering the Octopus Energy API with our requests; this data only changes once a day.

We then create two more sensors which read from that returned data.

- platform: template
  sensors:
    oe_electricity_yesterday:
      friendly_name: "Electricity Usage Yesterday"
      unit_of_measurement: 'kWh'
      value_template: "{{ state_attr('sensor.oe_electricity', 'results')[1]['consumption'] }}"
      device_class: energy
    oe_gas_yesterday:
      friendly_name: "Gas Usage Yesterday"
      unit_of_measurement: 'm³'
      value_template: "{{ state_attr('sensor.oe_gas', 'results')[1]['consumption'] }}"
      device_class: gas
This in turn creates two more sensors, sensor.oe_electricity_yesterday and sensor.oe_gas_yesterday, holding the energy consumption for the previous day.
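To see why the templates index results[1], here is a Python sketch with an illustrative payload in the shape the consumption endpoint returns when grouped by day. The figures are made up, and it assumes, as the sensors above do, that results[0] is the current (still partial) day:

```python
import json

# Invented payload mirroring the group_by=day response shape.
payload = json.loads("""
{
  "count": 2,
  "results": [
    {"consumption": 3.21, "interval_start": "2021-10-14T00:00:00+01:00",
     "interval_end": "2021-10-14T12:00:00+01:00"},
    {"consumption": 7.98, "interval_start": "2021-10-13T00:00:00+01:00",
     "interval_end": "2021-10-14T00:00:00+01:00"}
  ]
}
""")

state = payload["count"]                          # what value_json.count exposes
yesterday = payload["results"][1]["consumption"]  # what the template sensor reads
print(state, yesterday)  # → 2 7.98
```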

Restart Home Assistant to make sure the changes are effective and the sensors are available.

Step 3:

Add an automation to fetch the data at midday every day. From past experience, the electricity data seems to be updated at 08:30 and the gas data at 11:00. I therefore run an automation at 12:00 every day to fetch the data for the previous day. I use "Call service" -> "Home Assistant Core Integration: Update entity" to fetch the latest data. The YAML which describes this automation is as follows.

alias: 'Energy: Update OE readings'
description: ''
trigger:
  - platform: time
    at: '12:00'
condition: []
action:
  - service: homeassistant.update_entity
    target:
      entity_id:
        - sensor.oe_gas
        - sensor.oe_electricity
mode: single
Step 4:

Create Lovelace cards.

The yaml which describes this card is

type: entities
entities:
  - entity: sensor.oe_electricity_yesterday
  - entity: sensor.oe_gas_yesterday



The yaml for this card is

type: vertical-stack
cards:
  - type: sensor
    entity: sensor.oe_electricity_yesterday
    graph: line
    hours_to_show: 168
    name: Electricity Usage for the week
  - type: sensor
    entity: sensor.oe_gas_yesterday
    graph: line
    hours_to_show: 168
    name: Gas usage for the week
    detail: 1
There is a second option available: fetching data directly off the smart meter. This involves the purchase of a "Consumer Access Device" which can talk to the smart meter directly. I have already received such a device and will be investigating it next.


Monday, August 30, 2021

Energy Dashboard in Home Assistant: Tracking my electricity consumption


With the release of Home Assistant Core 2021.8, a new feature called Home Energy Management was added to Home Assistant. It provides a nice dashboard where you can track your house's energy consumption.

I have been interested in capturing my energy consumption data for a while now and have documented my use of an energy consumption meter before. That project was abandoned in due course because I never spent the time to build a nice dashboard.

Since then, my electricity and gas meters have been upgraded to smart meters and I can read the energy consumption on my energy provider Octopus Energy's website. The data is, however, delayed by a day. It is possible to obtain the latest consumption data, but that involves third-party access to your energy data, which I wasn't very keen on. So I have been thinking of capturing the electricity consumption data in some manner and decided to use an energy consumption meter clamp and my RTL-SDR USB dongle.

This post documents my setup.

My Energy Consumption Dashboard

Hardware used: 

1) Efergy Elite Classic:


I just use the transmitter. The clamp goes onto the live wire feeding into the electricity meter. The device transmits the current flowing through the wire (in amperes) every few seconds on 433.55 MHz.

2) RTL-SDR Realtek RTL2832U + R820T tuner receiver dongle




The RTL-SDR dongle allows us to capture the data being sent over the 433.55 MHz frequency.


The dongle is connected to a Raspberry Pi which continuously listens for data sent by the transmitter. To do this, I use the tool rtl_433, available on GitHub. It runs in a screen session on the Raspberry Pi.




rtl_433 -f 433550000 -R36 -Fmqtt://192.168.1.10:1883,user=mqtt,pass=mqtt

  • The arg -f 433550000 sets the frequency to listen on.
  • The arg -R36 selects the decoder to use. This is specific to this transmitter.
  • The arg -Fmqtt://192.168.1.10:1883,user=mqtt,pass=mqtt sets the MQTT server to send the data to.

The command captures the data and sends the current consumption in amperes to the MQTT server, which is also my Home Assistant server.

 The next steps are all on my Home Assistant Server.

I have split my Home Assistant configuration file so that the sensors configuration is tracked in a separate file. Similarly, I have also split out the configuration for the new Utility Meter integration. I do this by having the following configuration in the configuration.yaml file.

sensor: !include sensor.yaml
utility_meter: !include utility_meter.yaml

If, unlike this, you define the sensor platform and the new utility meter integration within configuration.yaml itself, you will have to adjust the YAML indentation slightly.

As my first step, I need to create an MQTT sensor to read the incoming data in amperes sent by the rtl_433 tool and convert it into watts for further use by Home Assistant. My sensor.yaml file contains the following block.
- platform: mqtt
  name: "Home Electricity Watts"
  state_topic: "rtl_433/pi-hole/devices/Efergy-e2CT/42173/current"
  unit_of_measurement: "W"
  value_template: "{{ (value | float * 240) | round(2) }}"
  device_class: power
Here, the topic is determined by connecting to the MQTT server and subscribing to the topic rtl_433/#. This shows the various topics being posted to by the rtl_433 tool. For my specific transmitter, the topic used is rtl_433/pi-hole/devices/Efergy-e2CT/42173 and, within this, the subtopic we are interested in is "current". Accordingly, the state_topic used is "rtl_433/pi-hole/devices/Efergy-e2CT/42173/current".

I also use value_template to multiply the current (in amperes) by the household voltage in the UK (240 V) to obtain the power consumption in watts. This is exposed as sensor.home_electricity_watts.

However, this value is just the power consumed at that specific instant, in watts. To obtain the energy used in kWh, we need to multiply each power reading by the time period over which it applies and sum the slices. We do this using the Integration - Riemann sum integral platform. In my sensor.yaml file, I have the following YAML block which does this.

- platform: integration                               
  name: home_electricity_kWh
  source: sensor.home_electricity_watts
  round: 3
  method: left
  unit_prefix: k
This block instructs Home Assistant to read data from sensor.home_electricity_watts and use the integration function to obtain the energy consumption in kWh. This is then exposed as sensor.home_electricity_kWh.
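The left Riemann sum the integration sensor performs can be sketched in a few lines of Python: each power sample (in W) is held until the next sample arrives, and the resulting energy slices are summed into kWh. The sample times and powers below are made up:

```python
# (seconds since start, power in W) -- invented samples for illustration.
samples = [
    (0,    2400.0),
    (600,  2400.0),
    (1200, 0.0),
    (1800, 0.0),
]

energy_wh = 0.0
for (t0, p0), (t1, _) in zip(samples, samples[1:]):
    # Left Riemann sum: the power at the left endpoint is held
    # constant over the interval until the next sample.
    energy_wh += p0 * (t1 - t0) / 3600.0

print(round(energy_wh / 1000, 3))  # kWh → 0.8
```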

Now we are ready to create utility meter devices. For me, these are defined in utility_meter.yaml.

daily_electricity:
  name: "Daily Electricity Usage"
  source: sensor.home_electricity_kWh
  cycle: daily
  tariffs:
    - peak
    - offpeak
weekly_electricity:
  name: "Weekly Electricity Usage"
  source: sensor.home_electricity_kWh
  cycle: weekly
  tariffs:
    - peak
    - offpeak
monthly_electricity:
  name: "Monthly Electricity Usage"
  source: sensor.home_electricity_kWh
  cycle: monthly
  tariffs:
    - peak
    - offpeak
These three devices are similar and differ only in the cycles used, i.e. daily, weekly and monthly. At the moment, I only make use of the daily_electricity utility meter. They consume the energy reading from sensor.home_electricity_kWh and create a utility meter which can be used by the Energy dashboard.

Since I use the Octopus Go tariff with Octopus Energy, I have two different tariff rates depending on the time of day. I name these tariffs peak and offpeak. The tariff costs are set in the Energy dashboard. These create multiple devices based on the utility_meter name and tariff. The ones we are interested in are sensor.daily_electricity_peak and sensor.daily_electricity_offpeak.

To add these utility meters to the Energy dashboard, go to Configuration->Energy. Click on Add Consumption, select "daily_electricity peak", select "Use a static price" and enter your tariff for peak consumption. For Octopus Go, I pay 0.1533 GBP/kWh. Similarly, add the consumption for the offpeak periods with its own tariff (0.05 GBP/kWh).
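The per-tariff pricing the dashboard applies amounts to a simple weighted sum. A Python sketch using the rates above, with made-up consumption figures:

```python
# Octopus Go rates quoted above, in GBP/kWh.
PEAK_RATE = 0.1533
OFFPEAK_RATE = 0.05

# Invented daily consumption figures for the example.
peak_kwh, offpeak_kwh = 5.2, 3.4

# Each tariff bucket is priced separately, then summed.
cost = round(peak_kwh * PEAK_RATE + offpeak_kwh * OFFPEAK_RATE, 2)
print(cost)  # → 0.97
```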

I have also added individual devices to track their energy consumption. This is done using Wifi smart power plugs which also capture energy consumption.

To complete the setup, we need an automation to select the right tariff. Unlike the examples given on the Home Assistant page, I use two separate automations. These are

alias: 'Energy: Switch Electricity Tariff - peak'
description: ''
trigger:
  - platform: time
    at: '04:30:00'
condition: []
action:
  - service: utility_meter.select_tariff
    target:
      entity_id: utility_meter.daily_electricity
    data:
      tariff: peak
  - service: utility_meter.select_tariff
    target:
      entity_id: utility_meter.weekly_electricity
    data:
      tariff: peak
  - service: utility_meter.select_tariff
    data:
      tariff: peak
    target:
      entity_id: utility_meter.monthly_electricity
mode: single
and
alias: 'Energy: Switch Electricity Tariff - offpeak'
trigger:
  - platform: time
    at: '00:30:00'
action:
  - service: utility_meter.select_tariff
    target:
      entity_id: utility_meter.daily_electricity
    data:
      tariff: offpeak
  - service: utility_meter.select_tariff
    target:
      entity_id: utility_meter.weekly_electricity
    data:
      tariff: offpeak
  - service: utility_meter.select_tariff
    data:
      tariff: offpeak
    target:
      entity_id: utility_meter.monthly_electricity
mode: single
These use the call service method to call utility_meter.select_tariff to select the peak/offpeak tariff for each of the utility meters.


This will also expose new devices

  • sensor.daily_electricity_peak - Consumption in peak hours
  • sensor.daily_electricity_offpeak - Consumption in offpeak hours
  • sensor.daily_electricity_peak_cost - Costs during peak hours
  • sensor.daily_electricity_offpeak_cost - Costs for offpeak hours.


You may have to let the system run for a whole cycle before all these devices are visible.

And finally, you can add a card to your dashboard to track daily electricity costs.



This is specified by the following yaml block.

type: entities
entities:
  - entity: sensor.home_electricity_watts
    name: Current Consumption
  - entity: sensor.daily_electricity_costs
    name: Daily Cost
  - entity: sensor.daily_electricity_peak
    name: Peak Consumption
  - entity: sensor.daily_electricity_peak_cost
    name: Peak Cost
  - entity: sensor.daily_electricity_offpeak
    name: Offpeak Consumption
  - entity: sensor.daily_electricity_offpeak_cost
    name: Offpeak Cost
title: Daily Electricity Costs

The sensor sensor.daily_electricity_costs is defined in my sensor.yaml file with the following entry

- platform: template
  sensors:
    daily_electricity_costs:
      friendly_name: "Energy Costs"
      unit_of_measurement: ".."
      value_template: "{{ (states('sensor.daily_electricity_peak') | float * 0.1533 + states('sensor.daily_electricity_offpeak') | float * 0.05) | round(2) }}"

I have had this system running for the past few days with various settings before finally settling on this setup.
 
The energy consumption meter gives a pretty good approximation of the actual energy consumed. For example, according to the Energy dashboard I consumed 8.1 kWh, while the data provided by the energy provider shows 7.98 kWh for the same day.
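For reference, that difference works out to roughly 1.5%:

```python
# Clamp-meter reading vs the supplier's figure for the same day.
measured, actual = 8.1, 7.98
error_pct = round((measured - actual) / actual * 100, 1)
print(error_pct)  # → 1.5
```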

I am still on the lookout to capture the actual data from the smart meter itself. Once this is available to me, I can simply change the source of the data consumed and continue with the rest of the setup.

I hope my setup is of use to others looking to set up their energy dashboard on Home Assistant. My thanks to the Home Assistant team for their development efforts, which have made setting up and managing my smart home systems easy and accessible to the rest of us.


Monday, May 24, 2021

Monitoring Dryer using Home Assistant

We have a dryer in the garage which uses a sensor to determine whether the clothes have been dried to the specified setting before deciding to continue or stop. Since the length of drying depends on the load size and type, it is difficult to predict how long this takes. To prevent creasing, we prefer putting our clothes on hangers as soon as the drying is done, which means multiple trips to the garage to see if the dryer has finished. This is a pain, and on several occasions we forgot to empty the dryer as soon as it was done. We needed a way to alert us in the house when the dryer is done and the load is ready.
 
I am already running a Home Assistant solution at home for automating various tasks. Along with various lights, cameras, trigger buttons and sensors, we also extensively use WiFi plugs running Tasmota to switch various electric devices on and off. The WiFi plugs include an energy monitor which lets me know the current drawn or the power consumed by the device served by the plug. I used this feature to implement my automation.
 
The dryer consumes approximately 2000 W of electricity when actively drying clothes. It consumes barely a Watt of energy when it is done with drying - this can be a short pause while it uses the sensor to determine the humidity level within the drum or a much longer wait after it is done drying and starts beeping to indicate that the drying is done.
 
We first translate this power consumption to a dryer 'mode'. To achieve this, I use a template sensor in Home Assistant.
 
For ease of maintenance, I have separated out my template sensor configuration. I do this by adding the following line to config/configuration.yaml

sensor: !include sensor.yaml


 
i.e. I ask Home Assistant to include the config/sensor.yaml file into the configuration.

My config/sensor.yaml file contains the following entries for the dryer sensor.

- platform: template
  sensors:
    dryer:
      friendly_name: "Dryer"
      value_template: >-
        {% if states('sensor.gosund8_energy_power')|int > 10 %}
          On
        {% elif states('sensor.gosund8_energy_power')|int > 0 %}
          StandBy
        {% else %}
          Off
        {% endif %}

In my case, I can read the power consumption of my wifi plug at "sensor.gosund8_energy_power". To determine this entity name, lookup the device under Configuration->Devices. Look for "DEVICE_NAME ENERGY Power". Click on it and read the entity id. My device name is Gosund8, so I lookup the device and click on "Gosund8 ENERGY Power" which has entity id: sensor.gosund8_energy_power.
 
The template above adds a sensor named dryer. I use an if-condition which checks the value of sensor.gosund8_energy_power: if the value is above 10, return On; if not above 10 but above 0, the dryer is not actively drying, so return StandBy; else report Off. The last case includes the situation where the device has been switched off at the plug and is no longer available.
 
I can now read the dryer mode at the entity sensor.dryer. The value of this entity is set to On, StandBy or Off according to the criteria above.
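The same mapping in plain Python, for clarity (thresholds as in the template; a sensor that is off or unavailable ends up in the final branch):

```python
def dryer_mode(power_w: int) -> str:
    """Map the plug's power reading (W) to a dryer mode."""
    if power_w > 10:
        return "On"        # actively drying
    elif power_w > 0:
        return "StandBy"   # paused or finished, drum idle
    return "Off"           # switched off or unavailable

print(dryer_mode(2000), dryer_mode(3), dryer_mode(0))  # → On StandBy Off
```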
 
I can also read this sensor value on my dashboard where I have the following entry. 

  - type: button
    tap_action:
      action: none
    entity: sensor.dryer
    show_state: true
    hold_action:
      action: none

Finally, I have an automation set up which is triggered when the state of this entity transitions from On to StandBy and has been in StandBy for 5 minutes. To do this, I use Trigger Type -> State when setting the trigger for the automation.
 
The yaml below captures the settings I use. 

- id: '1613076004832'
  alias: 'Dryer: Check if done'
  description: ''
  trigger:
  - platform: state
    entity_id: sensor.dryer
    from: 'On'
    to: StandBy
    for: '00:05:00'
  condition: []
  action:
  - service: media_player.volume_set
    data:
      volume_level: 0.5
    target:
      entity_id:
      - media_player.living_room_speaker
  - service: tts.google_say
    data:
      entity_id:
      - media_player.living_room_speaker
      message: Dryer Done.  Move the Laundry
      cache: true
  mode: single

 
In my house, we have Google Home set up. I use this to give us an indication when the drying cycle is completed: I simply read out a message on my Google Home in the living room saying that the dryer is done and the laundry can be picked up. You can also use other actions, such as switching a particular light on, sending a notification to your phone app, etc.


Thursday, April 01, 2021

Minikube on Fedora 33

I have been using minikube with the kvm2 driver to run a test Kubernetes environment. Here I document the steps I took to install minikube on my Fedora machine.

Minikube with kvm is intended to be run by a non-privileged user. In my case, I use a user which is part of the libvirt group, allowing the user to create VMs.

Download and install the latest minikube package. This has to be run as root or a user with sudo access.

$ sudo dnf install https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm


Start up minikube. The command below downloads a KVM image which it then uses to create a virtual machine called minikube. You can see it running by calling 'virsh list' as the root user.
$ minikube start --driver=kvm2
😄  minikube v1.18.1 on Fedora 33
✨  Using the kvm2 driver based on user configuration
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2: 11.39 MiB / 11.39 MiB  100.00% 28.09 MiB p/s
💿  Downloading VM boot image ...
    > minikube-v1.18.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.18.0.iso: 212.99 MiB / 212.99 MiB [] 100.00% 37.23 MiB p/s 6s
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v9-v1....: 491.22 MiB / 491.22 MiB  100.00% 39.17 Mi
🔥  Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟  Enabled addons: default-storageclass, storage-provisioner
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


Instead of passing the option --driver=kvm2, you can also set kvm2 to be the default driver.
$ minikube config set driver kvm2


To check status of minikube,
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
timeToStop: Nonexistent



Before we can use minikube, we need to install the kubectl utility to access the kubernetes cluster.
As root or a user with sudo access, install package kubernetes-client.
$ sudo dnf install kubernetes-client


kubectl uses the config file under ~/.kube/config

To check if minikube is setup correctly, you can check the version of the client and the server with the command
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"archive", BuildDate:"2020-07-28T00:00:00Z", GoVersion:"go1.15rc1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

 

Thursday, June 06, 2019

Howto: CIFS kerberos mount

Steps

1) I use a Windows server with AD configured. A Samba server with Kerberos configured can be used too.

2) Set up /etc/krb5.conf. My test machines use the following.
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log


[libdefaults]
default_realm = ENG1.GSSLAB.FAB.REDHAT.COM
dns_lookup_realm = true
dns_lookup_kdc = true
allow_weak_crypto = 1


[realms]
ENG1.GSSLAB.FAB.REDHAT.COM = {
  kdc = vm140-52.eng1.gsslab.fab.redhat.com:88
}


[domain_realm]
.eng1.gsslab.fab.redhat.com = ENG1.GSSLAB.FAB.REDHAT.COM
eng1.gsslab.fab.redhat.com = ENG1.GSSLAB.FAB.REDHAT.COM
3) Edit /etc/request-key.conf and add the following 2 lines (see man cifs.upcall)

create      cifs.spnego    * * /usr/sbin/cifs.upcall %k
create      dns_resolver   * * /usr/sbin/cifs.upcall %k
4) As the root user, initialise with an AD user's credentials
# kinit wintest2
Password for wintest2@ENG1.GSSLAB.FAB.REDHAT.COM:
5) Now mount using the multiuser option to allow multiple users who have authenticated with their own credentials to use the mount.

 # mount -t cifs -o sec=krb5,sign,multiuser vm140-52.eng1.gsslab.fab.redhat.com:/exports /mnt
The multiuser mount option allows a single CIFS mount to be used by multiple users with their own credentials. An example is a CIFS share containing the users' home directories. Instead of individually mounting each user's home directory as they log in, the root user on the client machine can mount the exported homes share under /home. As users log in, they access their CIFS-mounted home directory using their own credentials. A new session is set up each time a new user accesses the share, and this session is subsequently used for that user when accessing the share.
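If you want the mount to persist across reboots, the same options can be carried into /etc/fstab. A sketch using the hostname from this example (in UNC form, which mount.cifs also accepts):

```
# /etc/fstab entry for the kerberized multiuser mount above
//vm140-52.eng1.gsslab.fab.redhat.com/exports  /mnt  cifs  sec=krb5,sign,multiuser  0  0
```

Note that users will still need a valid Kerberos ticket (via kinit) before they can access the share.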

Friday, May 17, 2019

Samba multichannel - Connecting to an existing channel

We investigate how a new channel is added to an existing session in a multichannel connection.

We first need to familiarise ourselves with how a new incoming connection is handled:
http://sprabhu.blogspot.com/2018/03/samba-handling-new-connections.html

To summarise how a new connection is created:

a) From the main thread, we call main()->open_sockets_smbd()->smbd_open_one_socket()->tevent_add_fd() to set a tevent handler to call smbd_accept_connection() whenever a new connection is opened with the samba server.

b) For a new connection coming in, the server calls smbd_accept_connection() which forks a child process and calls smbd_process() in the child.

c) Within smbd_process(), a new client (struct smbXsrv_client) and a new xconn (struct smbXsrv_connection) are created. The xconn itself is added to the connection list on the newly created client.

d) Within smbd_add_connection(), we also add a tevent fd handler smbd_server_connection_handler() to handle incoming data on the new socket created for the client.

We also set up the infrastructure necessary to pass on the socket file descriptor. When a new client is created within smbd_process()->smbXsrv_client_create(), we set up the messaging infrastructure to handle incoming message requests for the message id MSG_SMBXSRV_CONNECTION_PASS.

NTSTATUS smbXsrv_client_create(TALLOC_CTX *mem_ctx,
                               struct tevent_context *ev_ctx,
                               struct messaging_context *msg_ctx,
                               NTTIME now,
                               struct smbXsrv_client **_client)
{
..
        global->server_id = messaging_server_id(client->msg_ctx);
..
        subreq = messaging_filtered_read_send(client,
                                        client->raw_ev_ctx,
                                        client->msg_ctx,
                                        smbXsrv_client_connection_pass_filter,
                                        client);
..
        tevent_req_set_callback(subreq, smbXsrv_client_connection_pass_loop, client);
..
}  
i.e. for incoming requests with message id MSG_SMBXSRV_CONNECTION_PASS, we call the handler smbXsrv_client_connection_pass_loop().

At this point, the socket is established. When data is first sent onto the socket by the client, it is handled by the tevent handler smbd_server_connection_handler() followed by smbd_server_connection_read_handler() which subsequently calls process_smb() to process the incoming request.

static void smbd_server_connection_handler(struct tevent_context *ev,
                                           struct tevent_fd *fde,
                                           uint16_t flags,
                                           void *private_data)
{
..
        //xconn is passed as argument to the tevent callback. We read this argument
        struct smbXsrv_connection *xconn =
                talloc_get_type_abort(private_data,
                struct smbXsrv_connection);
..
        if (flags & TEVENT_FD_READ) {
                smbd_server_connection_read_handler(xconn, xconn->transport.sock);
                return;
        }
}

//Used to handle all incoming read calls.
static void smbd_server_connection_read_handler(
        struct smbXsrv_connection *xconn, int fd)
{
..
process:
        process_smb(xconn, inbuf, inbuf_len, unread_bytes,
                    seqnum, encrypted, NULL);
}


It is here that we start differentiating between SMB1 and later connections.

void smbd_smb2_process_negprot(struct smbXsrv_connection *xconn,
                               uint64_t expected_seq_low,
                               const uint8_t *inpdu, size_t size)
{
..
        struct smbd_smb2_request *req = NULL;
..
        //Documented below
        status = smbd_smb2_request_create(xconn, inpdu, size, &req);

..
        status = smbd_smb2_request_dispatch(req);
..
}

static NTSTATUS smbd_smb2_request_create(struct smbXsrv_connection *xconn,
                                         const uint8_t *_inpdu, size_t size,
                                         struct smbd_smb2_request **_req)
{
        struct smbd_server_connection *sconn = xconn->client->sconn;
..
        struct smbd_smb2_request *req;
..
        req = smbd_smb2_request_allocate(xconn);
..
        req->sconn = sconn;
        req->xconn = xconn;
..
        status = smbd_smb2_inbuf_parse_compound(xconn,
                                                now,
                                                inpdu,
                                                size,
                                                req, &req->in.vector,
                                                &req->in.vector_count);
..
        *_req = req;
        return NT_STATUS_OK;
}
At this point, the buffer containing the incoming request has been parsed into an iovec array (req->in.vector) by smbd_smb2_inbuf_parse_compound() and is stored in the struct smbd_smb2_request *req.

We call smbd_smb2_request_dispatch() to handle the data.

NTSTATUS smbd_smb2_request_dispatch(struct smbd_smb2_request *req)
{
        struct smbXsrv_connection *xconn = req->xconn;
..
        /*
         * Check if the client provided a valid session id.
         *
         * As some command don't require a valid session id
         * we defer the check of the session_status
         */
        session_status = smbd_smb2_request_check_session(req);
..
        flags = IVAL(inhdr, SMB2_HDR_FLAGS);
        opcode = SVAL(inhdr, SMB2_HDR_OPCODE);
        mid = BVAL(inhdr, SMB2_HDR_MESSAGE_ID);
..
        switch (opcode) {
..
        case SMB2_OP_NEGPROT:
                SMBPROFILE_IOBYTES_ASYNC_START(smb2_negprot, profile_p,
                                               req->profile, _INBYTES(req));
                return_value = smbd_smb2_request_process_negprot(req);
                break;
..
}


Since this is the first call sent by the client, it is a negotiate request, which is handled by smbd_smb2_request_process_negprot().

NTSTATUS smbd_smb2_request_process_negprot(struct smbd_smb2_request *req)
{
..
        //Obtain the client GUID passed in the negotiate request
        in_guid_blob = data_blob_const(inbody + 0x0C, 16);
..
        status = GUID_from_ndr_blob(&in_guid_blob, &in_guid);
..
        xconn->smb2.client.guid = in_guid;
..
        if (xconn->protocol < PROTOCOL_SMB2_10) {
                /*
                 * SMB2_02 doesn't support client guids
                 */
                return smbd_smb2_request_done(req, outbody, &outdyn);
        }
        //Only SMB3 and later protocols here.

        if (!xconn->client->server_multi_channel_enabled) {
                /*
                 * Only deal with the client guid database
                 * if multi-channel is enabled.
                 */
                return smbd_smb2_request_done(req, outbody, &outdyn);
        }
        //Only clients with multichannel enabled here.
..
        status = smb2srv_client_lookup_global(xconn->client,
                                              xconn->smb2.client.guid,
                                              req, &global0);
..
        if (NT_STATUS_EQUAL(status, NT_STATUS_OBJECTID_NOT_FOUND)) {
        //If no existing connection is found, set it up.
                xconn->client->global->client_guid =
                        xconn->smb2.client.guid;
                status = smbXsrv_client_update(xconn->client);
..
                xconn->smb2.client.guid_verified = true;
        } else if (NT_STATUS_IS_OK(status)) {
        //We have found an existing client with the same guid.
        //So pass the connection to the original smbd process.
                status = smb2srv_client_connection_pass(req,
                                                        global0);

                if (!NT_STATUS_IS_OK(status)) {
                        return smbd_smb2_request_error(req, status);
                }
        //and terminate this connection.
                smbd_server_connection_terminate(xconn,
                                                 "passed connection");
                return NT_STATUS_OBJECTID_EXISTS;
        } else {
                return smbd_smb2_request_error(req, status);
        }

}

NTSTATUS smb2srv_client_connection_pass(struct smbd_smb2_request *smb2req,
                                        struct smbXsrv_client_global0 *global)
{
..
        pass_info0.initial_connect_time = global->initial_connect_time;
        pass_info0.client_guid = global->client_guid;
..
        pass_info0.negotiate_request.length = reqlen;
        pass_info0.negotiate_request.data = talloc_array(talloc_tos(), uint8_t,
                                                         reqlen);
..
        iov_buf(smb2req->in.vector, smb2req->in.vector_count,
                pass_info0.negotiate_request.data,
                pass_info0.negotiate_request.length);

        ZERO_STRUCT(pass_blob);
        pass_blob.version = smbXsrv_version_global_current();
        pass_blob.info.info0 = &pass_info0;
..
        ndr_err = ndr_push_struct_blob(&blob, talloc_tos(), &pass_blob,
                        (ndr_push_flags_fn_t)ndr_push_smbXsrv_connection_passB);
..
        //Add the created data blobs to an iov
        iov.iov_base = blob.data;
        iov.iov_len = blob.length;

        //and send the iovs to the original thread using
        //message id MSG_SMBXSRV_CONNECTION_PASS.
        status = messaging_send_iov(smb2req->xconn->client->msg_ctx,
                                    global->server_id,
                                    MSG_SMBXSRV_CONNECTION_PASS,
                                    &iov, 1,
                                    &smb2req->xconn->transport.sock, 1);
..
}
At this point, the smbd process for the new connection has sent the original smbd process a message containing the data (and the socket file descriptor) required to transfer the channel to the original process.

On the original process, the registered handler is invoked to process the incoming message.

static void smbXsrv_client_connection_pass_loop(struct tevent_req *subreq)
{
..
        //We read data from the iovs passed in the message.
..
        //We perform some sanity tests.
..
        SMB_ASSERT(rec->num_fds == 1);
        sock_fd = rec->fds[0];
..
        //We add the new connection to the original smbd process client.
        status = smbd_add_connection(client, sock_fd, &xconn);
..
        //We process the negprot on the original thread.
        xconn->smb2.client.guid_verified = true;
        smbd_smb2_process_negprot(xconn, seq_low,
                                  pass_info0->negotiate_request.data,
                                  pass_info0->negotiate_request.length);
..
}


At this point, we have:
a) Added a new connection xconn to the existing client of the original connection.
b) Set the data handler for the socket file descriptor to smbd_server_connection_handler(), so that any incoming data is handled by the smbd process serving the original connection.
c) Terminated the smbd process that was spawned for the new channel; all further requests on the channel are handled as described in (b).
