tag:blogger.com,1999:blog-58685284253459124052024-02-21T08:15:26.602+00:00andyfrench.info“If it hurts, do it more frequently, and bring the pain forward.” - Jez HumbleAndy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comBlogger278125tag:blogger.com,1999:blog-5868528425345912405.post-66160689977724851532022-01-15T22:05:00.003+00:002022-01-15T22:08:21.209+00:00File Explorer very slow to open after upgrade to Windows 11<h2 style="text-align: left;">The Problem</h2><p>After upgrading from Windows 10 to Windows 11 the File Explorer takes a very long time to open, sometimes up to a minute. </p><h2 style="text-align: left;">The Solution</h2><p>The solution that worked for me was to delete all the files in the following directory:</p><p><span style="font-family: courier;">%AppData%\Microsoft\Windows\Recent\AutomaticDestinations</span></p><p>I believe the files stored in this location are "jump list" files used by Windows to allow you quick access to files from applications pinned to the taskbar. By right-clicking an application icon pinned to your taskbar, you get a pop-up menu providing quick access to recent, pinned, or frequently-accessed files. The jump lists are essentially the data that make this possible.</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-41164066494342472652020-12-15T21:58:00.002+00:002020-12-15T21:58:32.141+00:00Render Layers node in Blender compositing tab not visible<h2 style="text-align: left;">Problem</h2><p><b>Blender version:</b> 2.91.0</p><p>I was following the Blender Guru tutorial to create a doughnut in Blender. In <a href="https://youtu.be/5lr8QnR5WWU">Part 7, Level 1</a> there is a section dealing with denoising. You are required to use the 'Compositing' tab which should have a node called 'Render Layers' visible. 
Not only was that node not visible, but the Shift-A command to add nodes did not allow anything to be added.</p><h2 style="text-align: left;">Solution</h2><p>So simple it hurts, but I lost half an hour of my life trying to find it.</p><p>Click the 'Use Nodes' checkbox. Render Layers immediately appears.</p><p><br /></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqlWe_xvcns8SecbN9TuPzdqrvnYcAKqYp5ayrk_ZPtoN8qPWDVS_dzYRUHV7a2BiGBl2mykGip3p8JbALPQbGDmfDyS7eK3S8v3eRBEPWGGYriWb5-eBiIyFk0Q6TzhYLAnlbRJ5h72g/" style="margin-left: 1em; margin-right: 1em;"><img data-original-height="574" data-original-width="595" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqlWe_xvcns8SecbN9TuPzdqrvnYcAKqYp5ayrk_ZPtoN8qPWDVS_dzYRUHV7a2BiGBl2mykGip3p8JbALPQbGDmfDyS7eK3S8v3eRBEPWGGYriWb5-eBiIyFk0Q6TzhYLAnlbRJ5h72g/s16000/image.png" /></a></div>Idiot.<p></p><p><br /></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-53431489570032157152020-10-06T16:44:00.005+01:002020-10-06T16:44:57.825+01:00WSL shell in ConEmu<p>Here's a quick note to remind me how I set up a WSL shell in ConEmu.</p><p>I just created a new task with the following command:</p><pre class="brush: text;">set PATH="%ConEmuBaseDirShort%\wsl";%PATH% & wsl</pre><p>Simple as that.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1XaQsn5xcGmzJQmm5NxvFW2gxdYJ__nex1eIZuUfGKekhIc7_ObaBYIF6Ucd040h1fnHyBIXQhbX15SgIZusM7NFgGduAnrh0bGsumtWJVtftnSBQqVqwRgrZhxCaLxmDJBZG31bev7I/" style="margin-left: 1em; margin-right: 1em;"><img data-original-height="515" data-original-width="763" height="432" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1XaQsn5xcGmzJQmm5NxvFW2gxdYJ__nex1eIZuUfGKekhIc7_ObaBYIF6Ucd040h1fnHyBIXQhbX15SgIZusM7NFgGduAnrh0bGsumtWJVtftnSBQqVqwRgrZhxCaLxmDJBZG31bev7I/w640-h432/image.png" width="640" /></a></div><br /><br /><p></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-29973821264770717072020-10-01T12:21:00.008+01:002020-10-06T16:45:36.092+01:00Error 0xc03a001a when installing Windows Subsystem for Linux<h2 style="text-align: left;">The Problem</h2><p>Here's a quick note to help if you have issues installing the Windows Subsystem for Linux, specifically the following error:</p><pre class="brush: text">"WslRegisterDistribution failed with error: 0xc03a001a"</pre><p>I was trying to install Ubuntu 20.04 LTS on Windows 10 Home (10.0.18363) following the instructions found here:</p><p></p><p></p><p></p><p></p><ul style="text-align: left;"><li><a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10">https://docs.microsoft.com/en-us/windows/wsl/install-win10</a></li></ul><div><br /></div><div>The issue occurred when I was trying to launch Ubuntu from the Store (Step 7 in the instructions). The error message appears in the command window.</div>
<h2>The Solution</h2>
<p>There are several posts out there that can help:</p><p></p><ul style="text-align: left;"><li><a href="https://utf9k.net/blog/wsl2-vhd-issue/">https://utf9k.net/blog/wsl2-vhd-issue/</a></li><li><a href="https://github.com/microsoft/WSL/issues/4299">https://github.com/microsoft/WSL/issues/4299</a></li></ul>
<div><br /></div>
<p>For me the solution was simple: I navigated to the %LOCALAPPDATA%\Packages\ folder and located the Ubuntu distribution package.</p><div><br /></div><div>
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDKOO3EtQFVw-dN2JcJWOsg8S81i07TlXT6s7ufp_6vBU2w1UqidxK3FmGSNoGl0ED6HyyuuIpbmIdIbAsxFLwwyAGbdNDhvRubtDKwOKP9G9aEA6R1DvtA6_ho3grGyOWZd7muqvy1eE/w640-h357/image.png"/>
<p><b><i>Figure 1 - Locate the package</i></b></p>
<p>Right-click the package, select Properties, and on the General tab click Advanced. Uncheck "Compress contents to save disk space".</p><p></p><p></p>
<img alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9-k-0qcCMyj40hnrCOv8jFMvX7pvI91x2yHZjyKGqUqfm7I8Lq3ayeYBNCo2yEh3GdiQDzY88GmKxLU9gQPTLvUG8hyphenhyphenMvIeN4h5MjEdAnu9D1aMo27obarNz0k7YB9QodXlKAYyhst1A/s16000/image.png" />
<p><b><i>Figure 2 - Uncheck "Compress contents to save disk space"</i></b></p>
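<p>If you prefer the command line, the same attribute can be cleared with the built-in compact tool from an elevated prompt. This is only a sketch: the package folder name below is a placeholder for whatever the Ubuntu package folder is called on your machine.</p>

```shell
REM /u = uncompress, /s = recurse into subfolders, /i = continue past errors
compact /u /s /i "%LOCALAPPDATA%\Packages\<UbuntuPackageFolder>\*"
```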
<p>Now relaunch the Linux distribution from the store. Everything should now work.</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-67454183987221884322020-03-23T16:55:00.001+00:002020-03-28T17:40:41.701+00:00MassTransit, SQS, .Net Core Worker Services, Linux and Docker!<p>Be aware, this blog post contains my notes on some investigation work I recently undertook with MassTransit, the open source service bus for .Net. The code contained herein was just sufficient to answer some basic questions around whether a MassTransit-based endpoint could be hosted in a .Net Core 3.1 Worker Service running on Linux.</p><br />
<p><b>NB:</b> This is all “hello world” style code so don’t look here for the best way to do things.</p><br />
<p>My objectives were:</p><br />
<ul><li>To configure MassTransit to run in a .Net Core Worker Service. </li>
<li>To test a MassTransit-based Worker Service running in a Linux container using Docker.</li>
<li>To test a MassTransit-based Worker Service running on a Linux host as a systemd daemon.</li>
<li>To use AWS SQS as the transport for MassTransit.</li>
</ul><br />
<h2>MassTransit in a Worker Service</h2><p>This step turned out to be straightforward. Again, to reiterate, my approach almost certainly isn’t best practice but served only to demonstrate the feasibility of the approach.</p><br />
<p>I created a Worker Service using Visual Studio and adapted the template project to suit. First up I installed a few NuGet packages to enable MassTransit using AWS SQS transport, and support for dependency injection. I also added support to deploy the Worker Service as a systemd daemon.</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhahSL6JQZyaj7lVaHJsFSrYhzLPtVnfnaFUy1uaSYYNDsnj7yZ-X7P9DYgYb4GV3fcmmsNhBeQetIIxRLOjmQNPlvrxr6lutsdUdfsM6Me2v_0MA5qV7ty0EEv5GiLRQPbn1dak0w3t4k/s1600/mt001.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="275" data-original-width="984" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhahSL6JQZyaj7lVaHJsFSrYhzLPtVnfnaFUy1uaSYYNDsnj7yZ-X7P9DYgYb4GV3fcmmsNhBeQetIIxRLOjmQNPlvrxr6lutsdUdfsM6Me2v_0MA5qV7ty0EEv5GiLRQPbn1dak0w3t4k/s1600/mt001.png" /></a></div><br />
<br />
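<p>For reference, the packages shown above can also be added from the command line. The package names below are the ones I’d expect for the MassTransit version of the day; check the screenshot against your own project, as the exact set may differ.</p>

```shell
# MassTransit core, the AWS SQS transport, DI integration, and systemd hosting support
dotnet add package MassTransit
dotnet add package MassTransit.AmazonSQS
dotnet add package MassTransit.Extensions.DependencyInjection
dotnet add package Microsoft.Extensions.Hosting.Systemd
```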
<p>I was then able to configure the MassTransit bus using dependency injection. However, I chose not to start the bus at this point but rather use the Worker Service lifetime events to do that (described later).</p><br />
<pre class="brush: csharp">public class Program
{
    public static void Main(string[] args)
    {
        var host = CreateHostBuilder(args).Build();
        var logger = host.Services.GetRequiredService<ILogger<Program>>();
        logger.LogInformation("Responder2 running.");
        host.Run();
    }

    private static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseSystemd()
            .ConfigureServices((hostContext, services) =>
            {
                services.AddHostedService<Worker>();
                services.AddMassTransit(serviceCollectionConfigurator =>
                {
                    serviceCollectionConfigurator.AddConsumer<Request2Consumer>();
                    serviceCollectionConfigurator.AddBus(serviceProvider =>
                        Bus.Factory.CreateUsingAmazonSqs(busFactoryConfigurator =>
                        {
                            busFactoryConfigurator.Host("eu-west-2", h =>
                            {
                                h.AccessKey("xxx");
                                h.SecretKey("xxx");
                            });

                            busFactoryConfigurator.ReceiveEndpoint("responder2_request2_endpoint",
                                endpointConfigurator =>
                                {
                                    endpointConfigurator.Consumer<Request2Consumer>(serviceProvider);
                                });
                        }));
                });
            });
}
</pre><br />
<p>Note that I set up a simple message consumer which would respond using request/response, just for testing purposes.</p><br />
<pre class="brush: csharp">public class Request2Consumer : IConsumer<IRequest2>
{
    private readonly ILogger<Request2Consumer> _logger;

    public Request2Consumer(ILogger<Request2Consumer> logger)
    {
        _logger = logger;
    }

    public async Task Consume(ConsumeContext<IRequest2> context)
    {
        _logger.LogInformation("Request received.");
        await context.RespondAsync<IResponse2>(new { Text = "Here's your response 2." });
    }
}
</pre><br />
<p>For the purposes of this example I chose to use the Worker Service hosted service lifetime events to start and stop the MassTransit bus. Rather than inherit from the BackgroundService base class – which provides an ExecuteAsync(CancellationToken) method I didn’t need – I implemented the IHostedService interface directly and used the StartAsync(CancellationToken) and StopAsync(CancellationToken) methods to start and stop the bus respectively.</p><br />
<pre class="brush: csharp">public class Worker : IHostedService
{
    private readonly ILogger<Worker> _logger;
    private IBusControl _bus;

    public Worker(ILogger<Worker> logger, IBusControl bus)
    {
        _logger = logger;
        _bus = bus;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Starting the bus...");
        return _bus.StartAsync(cancellationToken);
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _logger.LogInformation("Stopping the bus...");
        return _bus.StopAsync(cancellationToken);
    }
}
</pre><br />
<p>I used a separate console application (not shown here) to send messages to the Worker Service, which I initially ran as a console application on Windows. That worked pretty much immediately, proving that the .UseSystemd() call is a no-op when running in this mode, as expected. Great for development on Windows.</p><br />
<h2>Worker Service in Docker</h2><p>Running the Worker Service in Docker was also quite straightforward. I chose to use the sdk:3.1-bionic Docker image which uses Ubuntu 18.04. This choice of Linux flavour was arbitrary.</p><br />
<p>I had intended to try to get systemd running in the Docker container but I quickly realised that such an approach would run contrary to how you should use Docker. With Docker, you really want to containerise your application; services such as systemd aren’t available to you. So, I simply invoked the Worker Service DLL with dotnet as an entry point.</p><br />
<pre class="brush: text"># Build with SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS build
WORKDIR /app
COPY . ./
RUN dotnet restore \
    && dotnet publish ./MassTransit.WorkerService.Test.Responder/MassTransit.WorkerService.Test.Responder.csproj -c Release -o out
# Run with runtime image
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-bionic
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "MassTransit.WorkerService.Test.Responder.dll"]
</pre><br />
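<p>For completeness, building and running the image looked something like this (the tag name is illustrative, not from my notes):</p>

```shell
# Build the image from the directory containing the Dockerfile
docker build -t masstransit-responder .

# Run the container; console logging goes to stdout
docker run --rm masstransit-responder
```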
<p>This worked as expected, with the Worker Service responding to messages sent to it by my test console application. In effect, the Worker Service was running as if it were a console application.</p><br />
<p>So, build the Docker image…</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgknCH4ye6ntNv8PnJxoGOP9d0YsHjNzmKVMb3y1lO4naazfrSj20WwYJovp4nAllvYOnW_pTcESzvUd8mwZGsjKG5RdcBmKrFzyqUZalHoN90hx76b3Sb2UqsImyL5HvGdBEiBpfiHFlk/s1600/mt002.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="512" data-original-width="979" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgknCH4ye6ntNv8PnJxoGOP9d0YsHjNzmKVMb3y1lO4naazfrSj20WwYJovp4nAllvYOnW_pTcESzvUd8mwZGsjKG5RdcBmKrFzyqUZalHoN90hx76b3Sb2UqsImyL5HvGdBEiBpfiHFlk/s1600/mt002.png" /></a></div><br />
<p>Run the container…</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgYIRher8A5r1qsOgWsacc6eQe1uMlKnfTiGNZmr2fGwUjun5FZ5rVVN8_ndOfl6rSEUq9goYEDbcOH3qMCHmiPaPff-79IkZV1SCpa6IXEIN7GkSfiEBNmu7Z1lEsTUE8ILN7PY7AzhA/s1600/mt003.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="512" data-original-width="979" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgYIRher8A5r1qsOgWsacc6eQe1uMlKnfTiGNZmr2fGwUjun5FZ5rVVN8_ndOfl6rSEUq9goYEDbcOH3qMCHmiPaPff-79IkZV1SCpa6IXEIN7GkSfiEBNmu7Z1lEsTUE8ILN7PY7AzhA/s1600/mt003.png" /></a></div><br />
<p>All looks good so use a test console application to send a message and see what happens…</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSTl69P9t2tb_Ncqf2UpEzAFJ3eZE1vAArmNSQb0m-xREmNTyt40f4QoA8y9ny48SmpA6QFnEGni-2cuVAhPC8v0D5X2OaghMMkxumB8qQXcWozJotaVGCn0u0cEhBMZbPN2KslJwqgdI/s1600/mt004.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="740" data-original-width="1221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSTl69P9t2tb_Ncqf2UpEzAFJ3eZE1vAArmNSQb0m-xREmNTyt40f4QoA8y9ny48SmpA6QFnEGni-2cuVAhPC8v0D5X2OaghMMkxumB8qQXcWozJotaVGCn0u0cEhBMZbPN2KslJwqgdI/s1600/mt004.png" /></a></div><br />
<p>Worked like a charm.</p><br />
<h2>Worker Service as a systemd daemon</h2><p>It’s worth saying up front that I found the following blog post of considerable use:</p><br />
<ul><li><a href="https://devblogs.microsoft.com/dotnet/net-core-and-systemd/">https://devblogs.microsoft.com/dotnet/net-core-and-systemd/</a></li>
</ul><br />
<p>To test this out I could have spun up a Linux machine in AWS (other cloud platform providers are available) but I chose to experiment with a Raspberry Pi. Note that the code for the Worker Service was unchanged from when it was run in Docker (see above).</p><br />
<p>To run the Worker service as a systemd daemon on Linux you first need to build the application with the appropriate Target Runtime. See the following documents for more information on that:</p><br />
<ul><li><a href="https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-build">https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-build</a></li>
<li><a href="https://docs.microsoft.com/en-us/dotnet/core/rid-catalog">https://docs.microsoft.com/en-us/dotnet/core/rid-catalog</a></li>
</ul><br />
<p>However, being a bit lazy I chose to publish directly from Visual Studio and to copy the files to the Pi manually later. For this I selected the linux-arm runtime. Note also that to avoid installing the dotnet runtime on the device I chose the Deployment Mode of Self-contained.</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4pP0vD3qS0s6HGLigzfHqS9UmG_B4xuWDxInaVF3Y4dFaYgVPwX52vbhx58Bs6-tKUVemvyCMlcP-7hb4-u5wir4OzKX3mLJCIuyTDG1bGLURp8Ljt_3CqQ9yBIW58I-hL1govamC_co/s1600/mt005.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="476" data-original-width="716" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4pP0vD3qS0s6HGLigzfHqS9UmG_B4xuWDxInaVF3Y4dFaYgVPwX52vbhx58Bs6-tKUVemvyCMlcP-7hb4-u5wir4OzKX3mLJCIuyTDG1bGLURp8Ljt_3CqQ9yBIW58I-hL1govamC_co/s1600/mt005.png" /></a></div><br />
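<p>The command-line equivalent of that publish profile should be roughly the following (the output folder is illustrative):</p>

```shell
# Self-contained publish for 32-bit ARM Linux (Raspberry Pi)
dotnet publish ./MassTransit.WorkerService.Test.Responder/MassTransit.WorkerService.Test.Responder.csproj \
    -c Release -r linux-arm --self-contained true -o ./publish/linux-arm
```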
<p>The effect of selecting linux-arm as the Target Runtime is that the published folder includes an extension-less binary that is executable on Linux. This file is important later on when we configure systemd.</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwTZ2EXB_hSmvrH_SshErkpkKiQkINV8AKvpm-kgucMM6m5bfMAJHcKJ5ir8aUUruXD8OzEdxK2vj0D8xC8ecAydpzzg2f1bheU-uE5IELUP9Qo7_W_kOV8E1bRoYR2bHFs87synMoFhI/s1600/mt006.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="307" data-original-width="993" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwTZ2EXB_hSmvrH_SshErkpkKiQkINV8AKvpm-kgucMM6m5bfMAJHcKJ5ir8aUUruXD8OzEdxK2vj0D8xC8ecAydpzzg2f1bheU-uE5IELUP9Qo7_W_kOV8E1bRoYR2bHFs87synMoFhI/s1600/mt006.png" /></a></div><br />
<p>The next step is to copy the files to the target device. I copied mine to a directory called /app/linux-arm. Note that you also need to set executable permissions so that systemd can run the application.</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguY-Xr5CoBOzG2AbM3mbpn0csbShrsGvF3OAc7GWGBmfyWQErojT0o3cF_WilYKm0HlotHg-qnuTZz9vr_BaRrgMm8dIdU83_tGVrryjgFSee7h5Qh9TNPsjde0NjF2Q-7ep5CkJk74QI/s1600/mt007.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="499" data-original-width="877" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguY-Xr5CoBOzG2AbM3mbpn0csbShrsGvF3OAc7GWGBmfyWQErojT0o3cF_WilYKm0HlotHg-qnuTZz9vr_BaRrgMm8dIdU83_tGVrryjgFSee7h5Qh9TNPsjde0NjF2Q-7ep5CkJk74QI/s1600/mt007.png" /></a></div><br />
<p>The next steps follow this post: <a href="https://devblogs.microsoft.com/dotnet/net-core-and-systemd/">https://devblogs.microsoft.com/dotnet/net-core-and-systemd/</a></p><br />
<p>My .service file looked like this:</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEoQQ_jvXzm6LXY215AgHUvG5R3z8B-xnX6bC7L0jqO0RivYdjV0FHwqLecI7mnrQSUgCSnmpUHGPhbdK1q_DNQeLRWiGsi4RzyZWyjiju-sgyqM5r-71aZ3MPt-ZAcZP3SCYgzdDyCcM/s1600/mt008.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="358" data-original-width="724" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEoQQ_jvXzm6LXY215AgHUvG5R3z8B-xnX6bC7L0jqO0RivYdjV0FHwqLecI7mnrQSUgCSnmpUHGPhbdK1q_DNQeLRWiGsi4RzyZWyjiju-sgyqM5r-71aZ3MPt-ZAcZP3SCYgzdDyCcM/s1600/mt008.png" /></a></div><br />
<br />
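<p>In case the screenshot is hard to read, a representative unit file – based on the Microsoft blog post linked above – would look like this. The description, unit contents and paths are illustrative; Type=notify matches the systemd integration used here, and ExecStart uses the /app/linux-arm directory from the previous step.</p>

```ini
[Unit]
Description=MassTransit Worker Service test responder

[Service]
Type=notify
ExecStart=/app/linux-arm/MassTransit.WorkerService.Test.Responder
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

<p>Copy the file to /etc/systemd/system/, then run sudo systemctl daemon-reload followed by sudo systemctl start with your unit name.</p>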
<p>Note that ExecStart must point at the extension-less executable created when you built the application with a Linux target runtime. As you can see, the daemon was now up-and-running.</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzZ-7jCq_OuY5Hs5b5PkUsrg4zRKQqPz22HW2k4SNdFVbyRaNn8JC_WO7YiZ4vdTJhhMfgID4pbrF5OiIItf2okonReqh2lXnokvSAVt40FRM8hcu2Kk-Qi_WazxVbViWGTaYpUzWP154/s1600/mt009.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="376" data-original-width="954" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzZ-7jCq_OuY5Hs5b5PkUsrg4zRKQqPz22HW2k4SNdFVbyRaNn8JC_WO7YiZ4vdTJhhMfgID4pbrF5OiIItf2okonReqh2lXnokvSAVt40FRM8hcu2Kk-Qi_WazxVbViWGTaYpUzWP154/s1600/mt009.png" /></a></div><br />
<br />
<p>Let’s fire a test message at it and see what happens. To monitor the log output I ran the following command: journalctl -xef.</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIue7hrZFFc0EEPD8w8iu86Pbjn43NOUGNu5qxfq8zyMugA9nzYhAMZc_xh5ejKofVCQTOXJwP74VNC2mbFQpfT9JQjC3Nkjb4RiElXKMYfL4jZokLEhMgyM5i8GlyR-rsYEl5gqyfhjw/s1600/mt010.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="428" data-original-width="1198" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIue7hrZFFc0EEPD8w8iu86Pbjn43NOUGNu5qxfq8zyMugA9nzYhAMZc_xh5ejKofVCQTOXJwP74VNC2mbFQpfT9JQjC3Nkjb4RiElXKMYfL4jZokLEhMgyM5i8GlyR-rsYEl5gqyfhjw/s1600/mt010.png" /></a></div><br />
<p>Job done!</p><br />
<h2>Conclusion</h2><p>All objectives were met. I was able to run MassTransit in a Worker Service and to host that worker service in a Linux Docker container and as a systemd daemon on a Linux host.<br />
If I were to add .UseWindowsService() to the Worker Service it would also be possible to host the service as a Windows Service.</p><br />
Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-82498529899784221872018-10-13T11:16:00.001+01:002018-10-13T11:21:24.239+01:00AWS Cognito integration with lambda functions using the Serverless Framework<h2>Problem</h2><p>I have been writing an AWS lambda service based on the <a href="https://serverless.com/">Serverless Framework</a>. The question is, how do I secure the lambda using AWS Cognito? </p><p>Note that this post deals with Serverless Framework configuration and not how you set up Cognito user pools and clients etc. It is also assumed that you understand the basics of the Serverless Framework.</p><h2>Solution</h2><h3>Basic authorizer configuration</h3><p>Securing a lambda function with Cognito can be very simple. All you need to do is add some additional configuration – an authorizer – to your function in the serverless.yml file. Here’s an example:</p>
<pre class="brush: xml;">functionName:
  handler: My.Assembly::My.Namespace.MyClass::MyMethod
  events:
    - http:
        path: mypath/{id}
        method: get
        cors:
          origin: '*'
          headers:
            - Authorization
        authorizer:
          name: name-of-authorizer
          arn: arn:aws:cognito-idp:eu-west-1:000000000000:userpool/eu-west-1_000000000
</pre>
<p>Give the authorizer a name (this will be the name of the authorizer that’s created in the API gateway). Also provide the ARN of the user pool containing the user accounts to be used for authentication. You can get the ARN from the AWS Cognito console.</p><p><a href="https://lh3.googleusercontent.com/-2iznSkqFEZ0/W8HF5tTe4vI/AAAAAAAAD1k/wFxy3xOVmksTsi2zWNnBu_XN4Q9v83vAwCHMYCw/s1600-h/SNAGHTML2d99d5f%255B6%255D"><img width="1221" height="409" title="SNAGHTML2d99d5f" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML2d99d5f" src="https://lh3.googleusercontent.com/-LEkHcnRRpj8/W8HF6NCWYMI/AAAAAAAAD1o/sjQD7Wj-mmUSNRh_pmZLB7YE9MxSNdOEACHMYCw/SNAGHTML2d99d5f_thumb%255B3%255D?imgmax=800" border="0"></a></p><p>After you have deployed your service using the Serverless Framework (sls deploy) an authorizer with the name you have given it will be created. You can find it in the AWS console.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJ1o9jOzAzIt1ux2zCGcvyNJxZCLj2N_fHIT8rEJqlESP1RFIj7H5Y0ULM-mMnZG5TVxqEEP5M6zNqgEliIDnLkzVdQyMgrQ36finTnQG2XX3wMe6fY4QKIa1zdw6Ka2H0EATWq5yc8gY/s1600-h/SNAGHTML2dc34b3%255B5%255D"><img width="1115" height="554" title="SNAGHTML2dc34b3" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML2dc34b3" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyiC4QO0-IyfUdPO9aATjiACcWv86UlRsZuBeLsIl-a4j__l314KLEKpsCz_tijxFfvuVdnex4ywDGDogKGgo8g66cd8xHMOw1EeWzK6hH06FUGjiMXKPtpYGp1mahfitutwRbou-coFk/?imgmax=800" border="0"></a>There is a limitation with this approach however. If you add an authorizer to each of your lambda functions like this, the number of authorizers will quickly proliferate. 
<a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html">AWS limits the number of authorizers per API</a> to 10, so for complex APIs you may run out of authorizers.</p><p>An alternative is to use a shared authorizer.</p><h3>Configuring a shared authorizer</h3><p>It is possible to configure a single authorizer with the Serverless Framework and share it across all the functions in your API. Here’s an example:</p>
<pre class="brush: xml;">functionName:
  handler: My.Assembly::My.Namespace.MyClass::MyMethod
  events:
    - http:
        path: mypath/{id}
        method: get
        cors:
          origin: '*'
          headers:
            - Authorization
        authorizer:
          type: COGNITO_USER_POOLS
          authorizerId:
            Ref: ApiGatewayAuthorizer

resources:
  Resources:
    ApiGatewayAuthorizer:
      Type: AWS::ApiGateway::Authorizer
      Properties:
        AuthorizerResultTtlInSeconds: 300
        IdentitySource: method.request.header.Authorization
        Name: name-of-authorizer
        RestApiId:
          Ref: "ApiGatewayRestApi"
        Type: COGNITO_USER_POOLS
        ProviderARNs:
          - arn:aws:cognito-idp:eu-west-1:000000000000:userpool/eu-west-1_000000000
</pre>
<p>As you can see we have created an authorizer as a resource and referenced it from the lambda function. So, you can now refer to the same authorizer (called ApiGatewayAuthorizer in this case) from each of your lambda functions. Only one authorizer will be created in the API Gateway.</p><p>Note that the shared authorizer specifies an IdentitySource. In this case it’s an Authorization header in the HTTP request.</p><h2>Accessing an API using an Authorization header</h2><p>Once you have secured your API using Cognito you will need to pass an Identity Token as part of your HTTP request. If you are calling your API from a JavaScript-based application you could use <a href="https://aws-amplify.github.io/amplify-js/media/authentication_guide">Amplify which has support for Cognito</a>. </p><p>For testing using an HTTP client such as <a href="https://www.getpostman.com/">Postman</a> you’ll need to get an Identity Token from Cognito. You can do this using the AWS CLI. Here’s an example:</p>
<pre class="brush: bash;">aws cognito-idp admin-initiate-auth --user-pool-id eu-west-1_000000000 --client-id 00000000000000000000000000 --auth-flow ADMIN_NO_SRP_AUTH --auth-parameters USERNAME=user_name_here,PASSWORD=password_here --region eu-west-1
</pre>
<p>Obviously you’ll need to change the various parameters to match your environment (user pool ID, client ID, user name etc.). This will return 3 tokens: IdToken, AccessToken, and RefreshToken. </p><p>Copy the IdToken and paste it into the Authorization header of your HTTP request.</p><p><a href="https://lh3.googleusercontent.com/-1uvpsz1uaRY/W8HF7lKsTXI/AAAAAAAAD10/vY9y6fdzF_wl11KCgA7IFGXRiL8MGkQZQCHMYCw/s1600-h/SNAGHTML2f7516f%255B5%255D"><img width="1000" height="471" title="SNAGHTML2f7516f" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML2f7516f" src="https://lh3.googleusercontent.com/-yCbe6VcqVy0/W8HF8LDe5vI/AAAAAAAAD14/OlJl67BHh3EVDBPVn8oOtmwJLnanX6fJwCHMYCw/SNAGHTML2f7516f_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>That’s it.</p><h2>Accessing claims in your function handler</h2><p>As a final note, this is how you can access Cognito claims in your lambda function. I use .Net Core so the following example is in C#. The way to get the claims is to go via the incoming request object:</p>
<pre class="brush: csharp;">foreach (var claim in request.RequestContext.Authorizer.Claims)
{
    Console.WriteLine("{0} : {1}", claim.Key, claim.Value);
}
</pre>
<p><br></p><h2>See also</h2><ul><li><a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html">Amazon API Gateway Limits and Known Issues</a></li><li><a href="https://serverless.com/framework/docs/providers/aws/events/apigateway/">API Gateway (Serverless Framework documentation)</a></li><li><a href="https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/index.html">AWS cognito-idp documentation</a>.</li></ul>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-74637493248573266622018-08-04T12:50:00.001+01:002018-08-04T12:50:03.368+01:00How to send files to a Raspberry Pi from Windows 10<p>This post refers to a Raspberry Pi 3b+ running Raspbian Stretch.</p><p>A quick note; I’m going to use the PuTTy Secure Copy client (PSCP) because I have the PuTTy tools installed on my Windows machine.</p><p><a href="https://lh3.googleusercontent.com/-5rbT3ODSUJM/W2WS4T_5FKI/AAAAAAAADyY/psDO-hKVo6MU2VtUONj7CGnxc3p6XLFQACHMYCw/s1600-h/image%255B4%255D"><img width="928" height="680" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEix3dnmvUkWSKAYjQ4PQRwPqB0NLVoYnY-bhRUw9ri_MxHhVEy4AuZSguTa2sK12NnVT-u91kvLiel3mufq6cZRAojoiqGrQvXLynF2tI_ujzgKQDwwpisxlSrlFEWfXbdF9-pnnrTZVUo/?imgmax=800" border="0"></a></p><p>In this example I want to copy a file to the Raspberry Pi home directory from my Windows machine. 
Here’s the command format to run:</p><pre class="brush: bash">pscp -pw pi-password-here filename-here pi@pi-ip-address-here:/home/pi</pre><p>Replace the following with the appropriate values:</p><ul><li><em>pi-password-here</em> with the Pi user password</li><li><em>filename-here</em> with the name of the file to copy</li><li><em>pi-ip-address-here</em> with the IP address of the Raspberry Pi</li></ul><p><br></p><p>The following example includes the -r option to copy over a directory – actually a Plex plugin – rather than a single file to the Pi.</p><p><a href="https://lh3.googleusercontent.com/-WQs2dIxCgOo/W2WS5-f-r7I/AAAAAAAADyg/qAeDYyKIHnw-2Y389G8ATnCD38UaBTx3QCHMYCw/s1600-h/SNAGHTML874574a%255B5%255D"><img width="642" height="647" title="SNAGHTML874574a" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML874574a" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCGbJ4OhxHs4JrkxEVxFWqiuMDVtFL9_l84PnXDRSSAKWTZwVd9P4hgtL9DHbhwtSA2o8fsCCvhmSkvaR9am0bRazE7ixEwCKfpyY4z41K8Ad2NzmP4w4VumGMGmh-Vq3DfAi_nwdyWXA/?imgmax=800" border="0"></a></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-76717774058500447322018-08-04T12:04:00.001+01:002018-08-04T12:04:13.260+01:00How to check that AWS Greengrass is running on a Raspberry Pi<p>This post refers to a Raspberry Pi 3 B+ running Raspbian Stretch.</p><p>To check that AWS Greengrass is running on the device run the following command:</p><pre class="brush: bash">ps aux | grep -E 'greengrass.*daemon'</pre><p><a href="https://lh3.googleusercontent.com/-EFA_aTGmz_E/W2WIKnDkjnI/AAAAAAAADyE/n9zTOgHTEqswPZfd2VlyDfYAOmKqCxvfQCHMYCw/s1600-h/image%255B4%255D"><img width="791" height="224" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" 
src="https://lh3.googleusercontent.com/-i2Fb8C3elec/W2WILP7cpbI/AAAAAAAADyI/8vk8snyINvg4WkYbxTf7hM8MosRLXa25wCHMYCw/image_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>A quick reminder of Linux commands.</p><p>The <em>ps</em> command displays status information about active processes. The ‘aux’ options are as follows:</p><p>a = show status information for all processes that any terminal controls<br>u = display user-oriented status information<br>x = include information about processes with no controlling terminal (e.g. daemons)<p>The <em>grep</em> command searches for patterns in files. The -E option indicates that the given PATTERN – ‘greengrass.*daemon’ in this case – is an extended regular expression (ERE).Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-5503386388412791912018-08-03T12:16:00.001+01:002018-08-03T12:16:37.342+01:00Automatically starting AWS Greengrass on a Raspberry Pi on system boot<p>This post covers the steps necessary to get AWS Greengrass to start at system boot on a Raspberry Pi 3 B+ running Raspbian Stretch. The Greengrass software was at version 1.6.0.</p><p>I don’t cover the Greengrass installation or configuration process here. It is assumed that this has already been done. 
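For orientation, the greengrass.service unit created in Step 2 below looks, in outline, something like the sketch here. Treat the AWS document linked in Step 2 as the authoritative source for the contents; the /greengrass install path is an assumption based on a default install.

```ini
# /etc/systemd/system/greengrass.service (sketch only)
[Unit]
Description=Greengrass Daemon

[Service]
Type=forking
PIDFile=/var/run/greengrassd.pid
Restart=on-failure
ExecStart=/greengrass/ggc/core/greengrassd start
ExecReload=/greengrass/ggc/core/greengrassd restart
ExecStop=/greengrass/ggc/core/greengrassd stop

[Install]
WantedBy=multi-user.target
```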
Refer to <a href="https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-gs.html">this tutorial</a> for details.</p><p>What we are going to do here is use systemd to run Greengrass on system boot.</p><h2>Step 1</h2><p>Navigate to the systemd/system folder on the Raspberry Pi.</p><pre class="brush: bash">cd /etc/systemd/system/</pre><p><a href="https://lh3.googleusercontent.com/-OwKvK81fZ-E/W2Q5cYRZvVI/AAAAAAAADwc/HVUshtybqygIQLJBfPvzB7gpC3NlTBy8wCHMYCw/s1600-h/SNAGHTML32f6852%255B5%255D"><img width="572" height="226" title="SNAGHTML32f6852" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML32f6852" src="https://lh3.googleusercontent.com/-8Quim55JrXQ/W2Q5dLC0tmI/AAAAAAAADwg/quUCmLcTZUwJSWircR6CpcDQ5w7FMuEmgCHMYCw/SNAGHTML32f6852_thumb%255B2%255D?imgmax=800" border="0"></a></p><h2>Step 2</h2><p>Create a file called greengrass.service in the systemd/system folder using the nano text editor.</p><pre class="brush: bash">sudo nano greengrass.service</pre><p>Copy into the file the contents described in <a href="https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-core.html#start-on-boot">this document</a>.</p><p><a href="https://lh3.googleusercontent.com/-mRayekOIw64/W2Q5dtgaJkI/AAAAAAAADwk/1FbSMVU0Wpwy1ky3xV4caLeOLVUoHUPIwCHMYCw/s1600-h/image%255B4%255D"><img width="761" height="574" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-kPQMAWDccS8/W2Q5eGYAkqI/AAAAAAAADwo/Q-NTOJGVFqs8JvAoHaYHUVVS0Zi7KfUNgCHMYCw/image_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Save the file.</p><p><a href="https://lh3.googleusercontent.com/-IVtbatUNGcU/W2Q5exgEa0I/AAAAAAAADws/7R17DdzydSoI86BtJrxj6YKgyIBx3hqRACHMYCw/s1600-h/image%255B9%255D"><img width="761" height="574" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" 
src="https://lh3.googleusercontent.com/-oMRhmYLoQic/W2Q5fej6tgI/AAAAAAAADww/hjB0xdwLkIEQ3NObyUC56GY4yKngoTqtwCHMYCw/image_thumb%255B5%255D?imgmax=800" border="0"></a></p><h2>Step 3</h2><p>Change the permissions on the file so it is executable by root.</p><pre class="brush: bash">sudo chmod u+rwx /etc/systemd/system/greengrass.service</pre><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_wav40-N9kdGVR1kfWAK8YCzm8v3jDdNDcuklIQAM7UIS8kTI5JBNKGy_pjIyG3zILboihZORAZ3txd5ci5ehJif7T_X7eCHH43xeXdCnzQ4OodwPvu_X_wPWYxJKYYitwjzLrl0j96o/s1600-h/image%255B14%255D"><img width="761" height="574" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-8wxnESnNOYI/W2Q5gd77UcI/AAAAAAAADw4/t4KqjOVjlQEqwo2irzIJVpBfeTnQAZ8fwCHMYCw/image_thumb%255B8%255D?imgmax=800" border="0"></a></p><h2>Step 4</h2><p>Enable the service.</p><pre class="brush: bash">sudo systemctl enable greengrass</pre><p><a href="https://lh3.googleusercontent.com/-GXOpqyAx8lY/W2Q5hEhbh1I/AAAAAAAADw8/_am7tTZj8BAmY6lJq_cp62vyVF-KjyTnQCHMYCw/s1600-h/SNAGHTML33471df%255B5%255D"><img width="776" height="217" title="SNAGHTML33471df" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML33471df" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaZDKCE6Ze-Sfa8lkx3Q-QFFGlT_AhBFgkODO8V6nmb-chU2fKy-1K1K9DqNjKyoeX63_4p5yL5j51uqDv6Ju9HXaRctBEM4daoQubfkHlNlj0KPQLSYP0HLjHHZcGuQiDqgVNMJy2ous/?imgmax=800" border="0"></a></p><h2>Step 5</h2><p>You can now start the Greengrass service.</p><pre class="brush: bash">sudo systemctl start greengrass</pre><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhF5mW9ZNezprTVBoG4Z5YBhfL4Inua4mqpTCBhPe6JG5EXgVT6CqS4hfLCJNhL6XCj-hLx2A2c8wQsnwgyZevvCmcrf0vJAZiIx09ootptrOz46CRYakll-XMhX8MPkIh3r9_MX3EFfbM/s1600-h/SNAGHTML335429d%255B5%255D"><img width="866" height="214" 
title="SNAGHTML335429d" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML335429d" src="https://lh3.googleusercontent.com/-HqYtd_niziY/W2Q5innK-NI/AAAAAAAADxI/3iKd7rLZQ4A9-Hr6skwbbOsm1PunpdudACHMYCw/SNAGHTML335429d_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>You can check that Greengrass is running.</p><pre class="brush: bash">ps -ef | grep green</pre><p><a href="https://lh3.googleusercontent.com/-AXgzmJ31DVg/W2Q5jRAGjUI/AAAAAAAADxM/xvusxAF-KfA1rH-HDHGHO_9yBrQr-6mlgCHMYCw/s1600-h/SNAGHTML3364160%255B5%255D"><img width="1546" height="281" title="SNAGHTML3364160" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML3364160" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigKFwLSMCLhoZMKl2VUIOJI6buLExrMrzoG_irFleBO5LAdCaggeniPD38tOYp8dLbMzScPb5fDvut-_tvBga0IhR4XGGXRr5IANA-0nyUB-CjKRfChIFeAQNl87pDdWxbl4UdMLOEl2Q/?imgmax=800" border="0"></a></p><p>Reboot the system and check that Greengrass started automatically.</p><p><a href="https://lh3.googleusercontent.com/-R3k0zueLFeg/W2Q5kpT1J4I/AAAAAAAADxU/WVwZM77Ab5Uses6uB66zrSn3Q1YaccbwQCHMYCw/s1600-h/SNAGHTML3372dd3%255B5%255D"><img width="1564" height="528" title="SNAGHTML3372dd3" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML3372dd3" src="https://lh3.googleusercontent.com/-Cvr8_YjvgQg/W2Q5lNauijI/AAAAAAAADxY/5maySB48rRE4YJH5gqiYSZ8cW5cnZqVUgCHMYCw/SNAGHTML3372dd3_thumb%255B2%255D?imgmax=800" border="0"></a></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-7571310688904793762018-07-03T10:43:00.001+01:002018-07-03T22:33:56.131+01:00Preparing a Raspberry Pi for AWS Greengrass<p>This article refers to a Raspberry Pi 3 B+. 
What follows are some notes I took as I worked through the steps described here:</p><p><a title="https://docs.aws.amazon.com/greengrass/latest/developerguide/module1.html" href="https://docs.aws.amazon.com/greengrass/latest/developerguide/module1.html">https://docs.aws.amazon.com/greengrass/latest/developerguide/module1.html</a></p><p>For details of the process please refer to the document above.</p><p>One issue I did encounter was when running the Greengrass dependency checker. On my Raspberry Pi I struggled to get the memory cgroup configured correctly. The solution is included below (see Step 5).</p><h2>Step 1</h2><p>Initial setup of the Raspberry Pi and access via SSH followed the normal process. Once connected I could begin the steps specific to AWS Greengrass, starting with adding users.</p><h2>Step 2</h2><p>Basically this is Module 1: Step 9 in the document linked to above.</p><p><a href="https://lh3.googleusercontent.com/-A3_dSiSZRh4/WztE9-dc7vI/AAAAAAAADtg/V2hcEjQCbUgfaaA6rIB35_-s5jTClbQYQCHMYCw/s1600-h/SNAGHTML850b3e0%255B5%255D"><img width="882" height="665" title="SNAGHTML850b3e0" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML850b3e0" src="https://lh3.googleusercontent.com/-95lGIhWKw-8/WztE-Y1fD7I/AAAAAAAADtk/KyCZ8a4SsG4LaIemuSYftGEjrAyH2A-bQCHMYCw/SNAGHTML850b3e0_thumb%255B2%255D?imgmax=800" border="0"></a></p><h2>Step 3</h2><p>Module 1: item 10 calls for an upgrade to the Linux kernel. 
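Whether the running kernel already meets the suggested version can be checked from the shell; a small sketch (4.9.30 is the version the instructions mention, and the comparison relies on GNU sort's -V version ordering):

```shell
# Compare the running kernel release against the version Greengrass suggests.
required=4.9.30
current=$(uname -r | cut -d- -f1)   # e.g. "4.14.50-v7+" becomes "4.14.50"

# sort -V orders version strings numerically; if the required version
# sorts first (or equal), the running kernel is at least that version.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "kernel $current is at least $required"
else
  echo "kernel $current is older than $required"
fi
```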
<font color="#ff0000">I chose to ignore this step for now.</font> It will be interesting to see if there are any issues.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpKX34XWYlKi0g-sobq0aUGLa8IYjJobnBkOlKTq_w-lod_f2lHzR573wbMX0QduzZl7hcwbPH05fdVyfrkv1pvXBTWJi5saT0QeS4yNnFIFWB2sNOlYUU_jKE58JKoVNGqtCqbUqwUmI/s1600-h/SNAGHTML852ee7e%255B5%255D"><img width="924" height="394" title="SNAGHTML852ee7e" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML852ee7e" src="https://lh3.googleusercontent.com/-p9R8uPaNZCc/WztE_rADOPI/AAAAAAAADts/oxKkD0P1LS0k9-LiVvyqBfTrf-IRApkiACHMYCw/SNAGHTML852ee7e_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>The kernel version of my OS was 4.14.50 although the Greengrass instructions suggest 4.9.30.</p><h2>Step 4</h2><p>Module 1: item 11 covers locking down security. No real issues encountered. </p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimeP5JxAORN6VyRW3D5ZsXLdsPOrzT8E3AuVQX38S8d5ZohuQ13sVhni25zSe1xW-VM1RRq24t27vZXBmKlsY-BTM3oraVg1tzQD-PXKBw-2XCqLoZqLpOG-v_EDGuaRnlg87uUksAbqU/s1600-h/SNAGHTML8556fab%255B5%255D"><img width="672" height="303" title="SNAGHTML8556fab" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML8556fab" src="https://lh3.googleusercontent.com/-ZUGfP-Q63aU/WztFARv0WYI/AAAAAAAADt0/jFhaQlx1UskeRuMpac-9-ZlSg3h3GxRUgCHMYCw/SNAGHTML8556fab_thumb%255B2%255D?imgmax=800" border="0"></a></p><p><a href="https://lh3.googleusercontent.com/-FIzav4YuVzQ/WztFBL3leAI/AAAAAAAADt4/Vxx7-RAB4koBwGYjXY-N9Bh8lMsJeCyGACHMYCw/s1600-h/image%255B4%255D"><img width="531" height="340" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisIk46bMeT-FwbRtick0_Eti9-47RZpBmXiQhpmV54oeZgDtmco0iKhgTuDTD1WXozR95N3ByK-an6AGVVLRj-L9F_nouJhx-myfQVYXgTi3d8iJ97G4PUjwIH_BwkFTW6DDL6qAprIN8/?imgmax=800" border="0"></a></p><p><a href="https://lh3.googleusercontent.com/-dEOFtGIQ5xw/WztFCKRRf2I/AAAAAAAADuA/KqCojtK9ck4WN8hWZR9NzTLgE39iDyWlgCHMYCw/s1600-h/SNAGHTML856523b%255B5%255D"><img width="763" height="288" title="SNAGHTML856523b" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML856523b" src="https://lh3.googleusercontent.com/-tOCA-Y3XgUI/WztFCR_Zk0I/AAAAAAAADuE/StsI0aOI-rEfWdZs6zpl0p4niWPBtqBKwCHMYCw/SNAGHTML856523b_thumb%255B2%255D?imgmax=800" border="0"></a></p><h2>Step 5</h2><p>So now I was at Module 1: item 12 and ready to check dependencies. This was where the only significant issue was encountered. The initial steps all progressed well until I ran the AWS Greengrass dependency checker. This showed an issue with the memory cgroup dependency.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIXTAd3rSxVFDN6QNFK271bhL6hUMi0N1GVQHPFurXb2ilFuFFDUO-zEP10NrzB0FUpluMRv_lqQp6DldAQxGy2qIpLbbskzgMBAqsklVAIV_sSDJm7KDQ_se590ncmJNxOYSjeLFzpDM/s1600-h/SNAGHTML858dbd3%255B5%255D"><img width="1285" height="546" title="SNAGHTML858dbd3" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML858dbd3" src="https://lh3.googleusercontent.com/-trny-cZYDlU/WztFDjpL-oI/AAAAAAAADuM/9XHzrg_LBeoFVKmP8eYFWWz8jRiiEjIewCHMYCw/SNAGHTML858dbd3_thumb%255B2%255D?imgmax=800" border="0"></a></p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoEBh2fcvr5SkO4VEmzmDRwba5vEixxTdL_A2lUP0-OmLaeqwFmHqd5y-JB4xcdNpAwes0VDMhighfvtmKA4DB_mlBTZBY2n1pXbNYcZ9ph1xBujI4ITUgJEEVRbVTbHRhisPRY4SYrXs/s1600-h/SNAGHTML859e40c%255B5%255D"><img width="1349" height="746" title="SNAGHTML859e40c" style="border: 0px currentcolor; border-image: none; 
display: inline; background-image: none;" alt="SNAGHTML859e40c" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzCc0DAtuhkFiyztfXSBK7J3p_IRPZkXmH3LGeOyYfRw9I0Yoo1bk36bLVTmiPDjnXLtS-s_ch5IaQwrtVLywBzMIa5D_-SCxOE8ItwzoGapRsd177HGUWGrEA2WZiZdm3vLYQY_72vzE/?imgmax=800" border="0"></a></p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeW0QG19kfHYCyt6vi3veLjrVlWh2ZK1VI50_e6Fi1Fw0ek03MIHFAR_1TaZYLCqH_b5SA4lNPYROSFZZirW0pPQI_Znyz5YmuTPxWh0wV5_iMrQatZvEVCPiLP0YJTZi4jTc7SfRKekM/s1600-h/SNAGHTML85b4a34%255B5%255D"><img width="949" height="613" title="SNAGHTML85b4a34" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML85b4a34" src="https://lh3.googleusercontent.com/-doEBHYLAMSo/WztFF-a8R1I/AAAAAAAADuc/yz1rnaCc1Owj1PE8NNu3MrFIeqeyYpDVwCHMYCw/SNAGHTML85b4a34_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>The dependency checker showed the following message regarding a missing required dependency:</p><blockquote><p>1. The ‘memory’ cgroup is not enabled on the device.<br>Greengrass will fail to set the memory limit of user lambdas.</p></blockquote><p>For details about cgroups refer to the following document (although not specific to Raspbian, the information should still apply):</p><p><a title="https://sysadmincasts.com/episodes/14-introduction-to-linux-control-groups-cgroups" href="https://sysadmincasts.com/episodes/14-introduction-to-linux-control-groups-cgroups">https://sysadmincasts.com/episodes/14-introduction-to-linux-control-groups-cgroups</a></p><h3>Solution</h3><p>Running “cat /proc/cgroups” initially showed that memory <em>subsys_name</em> was not enabled (set to 0). 
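That check can be scripted rather than eyeballed; a sketch that parses /proc/cgroups-style input (columns assumed to be subsys_name, hierarchy, num_cgroups, enabled, as in the kernel's output):

```shell
# Succeeds (exit 0) only if the input contains a "memory" line
# whose 'enabled' column (4th field) is 1.
memory_cgroup_enabled() {
  awk '$1 == "memory" && $4 == 1 { found = 1 } END { exit found ? 0 : 1 }'
}

if [ -r /proc/cgroups ] && memory_cgroup_enabled < /proc/cgroups; then
  echo "memory cgroup is enabled"
else
  echo "memory cgroup is NOT enabled"
fi
```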
So, I edited the “cmdline.txt” file located in “/boot” with the nano text editor.</p><p><a href="https://lh3.googleusercontent.com/-DR3HjFWZ1IU/WztFGfR8ALI/AAAAAAAADug/ZjTRWhlk0zMq5P89LT6reQHDJ9bdhTvtQCHMYCw/s1600-h/SNAGHTML8696d58%255B5%255D"><img width="836" height="676" title="SNAGHTML8696d58" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML8696d58" src="https://lh3.googleusercontent.com/-No_vYuDGDMY/WztFG8DGJZI/AAAAAAAADuk/6fLZny9hiYoIUL_zRH657h4igl7t5RG4wCHMYCw/SNAGHTML8696d58_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>I added the following items to the line in that file:</p><pre class="brush: text">cgroup_memory=1 cgroup_enable=memory</pre><p><strong>NB:</strong> Both <em>cgroup_memory</em> and <em>cgroup_enable</em> were required to make this work. </p><p>The complete line from my cmdline.txt file ended up looking like this:</p><pre class="brush: text">dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 cgroup_memory=1 cgroup_enable=memory root=PARTUUID=c20ec4c3-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait</pre><p>I did a reboot and checked /proc/cgroups to see if the change had taken effect. 
It had, with the enabled flag set to 1.</p><p><a href="https://lh3.googleusercontent.com/-d7lw3GFFj1o/WztFHbS3UOI/AAAAAAAADuo/d_pQ5bRGquckbCff3kM77aJ15sNyI9VUwCHMYCw/s1600-h/SNAGHTML86e9941%255B5%255D"><img width="643" height="353" title="SNAGHTML86e9941" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML86e9941" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgC2Y94wfhCZSdat79FVfiqSB0VPXu2_EbGULJaK9gH7GDuTCLmq8-_mUEKrv8HXHiYQa2xeHekrpA06RIw_NfHkdeHH5D1kORiJQoNZ84o8zxhsSmkmCDMn5VWqpHOLZKBnc342VbA6mQ/?imgmax=800" border="0"></a></p><p>Time to recheck the AWS Greengrass dependencies.</p><p><a href="https://lh3.googleusercontent.com/-gq7hR-sanlk/WztFIbohgdI/AAAAAAAADuw/RrYTbfgbl2YVoyBWRbiFrt-mIVh36MX0QCHMYCw/s1600-h/SNAGHTML86fcc24%255B5%255D"><img width="981" height="484" title="SNAGHTML86fcc24" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML86fcc24" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSm-ApL5iiUHqDzT0Z6iBG6Hp2zynDCEjW-hLHQT4AvsP_-CB8Lj_ce0yDsHVuR5ZVxZD8fkIWPADZi0QdAg5RUgxEdq9SucgUHbBPp2P1xbLG8AES6wER6_GKwpHBas1MgOIswKZSaBg/?imgmax=800" border="0"></a></p><p>No issues this time. </p><p>I did, however, note the following message:</p><blockquote><p><u>Note:<br></u>1. It looks like the kernel uses ‘systemd’ as the init process. 
Be sure to set the ‘useSystemd’ field in the file ‘config.json’ to ‘yes’ when configuring Greengrass core.</p></blockquote><p><strong>Note to self:</strong> Don’t forget to do that!<br></p><p>This left me ready to install the Greengrass core software:</p><p><a title="https://docs.aws.amazon.com/greengrass/latest/developerguide/module2.html" href="https://docs.aws.amazon.com/greengrass/latest/developerguide/module2.html">https://docs.aws.amazon.com/greengrass/latest/developerguide/module2.html</a></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-48176820409211937932018-07-01T15:10:00.001+01:002018-07-01T15:10:41.790+01:00Mount a network drive for CrashPlan<p>I was having issues with getting CrashPlan to back up to network storage (a Western Digital MyBookLive). In short, the drive was not always mapped. I fixed it using advice given in this article:</p><p><a title="https://support.code42.com/CrashPlan/4/Backup/Back_up_files_from_a_Windows_network_drive" href="https://support.code42.com/CrashPlan/4/Backup/Back_up_files_from_a_Windows_network_drive">https://support.code42.com/CrashPlan/4/Backup/Back_up_files_from_a_Windows_network_drive</a></p><p>The batch file looked like this:</p><pre class="brush: text">net use Z: /DELETE
net use Z: "\\192.168.0.13\Andy" "password here" /USER:"username here" >>E:\mount_drive_for_crashplan.log</pre><p>And I created a scheduled task to run it as instructed in the article.</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-31192512172370002552018-06-29T21:24:00.001+01:002019-07-27T16:56:45.903+01:00Installing Plex media server on a Raspberry Pi<p>This post covers installing Plex media server on a Raspberry Pi 3 B+ running Raspbian Stretch Lite.</p><p>In this case I had already attached an external drive and set up Samba so I could easily add media files to the drive from my Windows PC. See <a href="http://www.andyfrench.info/2018/06/attaching-external-hard-drive-to.html">this post</a> for details.</p><h2>Step 1</h2><p>Firstly I added a new repository to apt so I could install Plex using apt-get. To do this I needed to get access to the dev2day.de repository. </p><p><a href="https://lh3.googleusercontent.com/-44bsQPpxPfQ/WzaVTOwcjFI/AAAAAAAADsY/NNgA9Ig7MDkWii-Aawoly2NkachGlvDYgCHMYCw/s1600-h/SNAGHTML13bbc6e%255B5%255D"><img width="560" height="403" title="SNAGHTML13bbc6e" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML13bbc6e" src="https://lh3.googleusercontent.com/-VU4aFqf0-L8/WzaVTlh2uPI/AAAAAAAADsc/dKW1jkkF2tIkeWU5wuIytDCrsCiZzYYHACHMYCw/SNAGHTML13bbc6e_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>The first step was to download the key and add it to apt. I switched to su for this. 
The commands below show what was run but not any of the resulting output.</p><pre class="brush: bash">sudo su<br>wget -q https://downloads.plex.tv/plex-keys/PlexSign.key -O - | sudo apt-key add -<br>exit</pre><h2>Step 2</h2><p>Then I created a new sources file for Plex.</p><pre class="brush: bash">cd /etc/apt/sources.list.d<br>sudo nano plexmediaserver.list<br></pre><p>I then added the following line to the file and saved it.<br></p>
<pre class="brush: text">deb https://downloads.plex.tv/repo/deb/ public main</pre><p>Note the version of Raspbian is Stretch. Modify the command for different versions.</p><p><a href="https://lh3.googleusercontent.com/-8m8g-Q42Aiw/WzaVUMqXA9I/AAAAAAAADsg/TrBfHz1g8Gk-H02hB7ZPfvPvEYKB89yZQCHMYCw/s1600-h/image%255B4%255D"><img width="711" height="304" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-DRxPzKl7Mt8/WzaVU6Rw_PI/AAAAAAAADsk/ovTlrqDxtjwRUB_wI8oGjizDkZhFX-XhwCHMYCw/image_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Then I updated apt-get so it had the latest package lists.</p><pre class="brush: bash">sudo apt-get update</pre><p><br></p><h2>Step 3</h2><p>Now I could install Plex.</p><pre class="brush: bash">sudo apt-get install plexmediaserver-installer</pre><h2>Step 4</h2><p>I wanted to move the Plex database from the SD card storage in the Raspberry Pi to the external drive.</p><p>To do that I stopped Plex before I moved the Plex library folder from its original location to a new location on the external drive. I then created a symbolic link in place of the original folder, pointing to the new location. Once that had been done I could restart Plex. 
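The move-and-symlink pattern itself can be sketched with plain directories; the paths below are placeholders under /tmp for illustration, and the service stop/start is omitted:

```shell
# Relocate a data directory and leave a symlink at the old path,
# so anything still using the old path is transparently redirected.
old=/tmp/demo-var-lib/plexmediaserver
new=/tmp/demo-external/plexmediaserver

rm -rf /tmp/demo-var-lib /tmp/demo-external
mkdir -p "$old" "$(dirname "$new")"
echo "library data" > "$old/db.txt"

mv "$old" "$new"      # move the real data to the new storage
ln -s "$new" "$old"   # old path now points at the new location

cat "$old/db.txt"     # prints: library data (still readable via the old path)
```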
Plex would still look for its library in the original location but be redirected by the symbolic link.</p><pre class="brush: bash">sudo service plexmediaserver stop<br>sudo mv /var/lib/plexmediaserver /media/seagateHDD/plexmediaserver/<br>sudo service plexmediaserver start<br><br></pre><p><br></p><p><a href="https://lh3.googleusercontent.com/-3BnsCRTveuA/WzaVWzgxX7I/AAAAAAAADso/uq_XPYkobOAgUS0Pq8vQX2t59vnF-ZDsQCHMYCw/s1600-h/image%255B14%255D"><img width="901" height="853" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-xlbAxztoptE/WzaVXt9ct8I/AAAAAAAADss/0i4Vn2sW7Fg4baC4lm7U0VCzMEvpHPDiACHMYCw/image_thumb%255B8%255D?imgmax=800" border="0"></a></p><h2>Step 5</h2><p>Then it was just a case of accessing Plex from a browser on my PC to check it was working. It was! I then started creating new libraries in Plex. The seagateHDD showed up nicely, along with the Media folder containing my video files.</p><p>The Plex server was available at <a title="http://192.168.0.20:32400/web/" href="http://192.168.0.20:32400/web/">http://192.168.0.20:32400/web/</a>.</p><p><a href="https://lh3.googleusercontent.com/-gKHIohxJiVc/WzaVYTn4CTI/AAAAAAAADsw/wz4EnT93IvYgipeq0fjFA8VPBNZgTIBbgCHMYCw/s1600-h/SNAGHTML15362b0%255B5%255D"><img width="1074" height="718" title="SNAGHTML15362b0" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML15362b0" src="https://lh3.googleusercontent.com/-LlqWN6smc-4/WzaVYwnO6yI/AAAAAAAADs0/AqT0lxCmMT0DNVMY-1pa6XHKwYNDB97zwCHMYCw/SNAGHTML15362b0_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Job done.</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-26338675725798566812018-06-29T20:18:00.001+01:002018-06-29T20:46:15.399+01:00Attaching an external hard drive to a Raspberry Pi<p>This post covers installing 
an external USB hard drive to a Raspberry Pi 3 B+ running Raspbian Stretch Lite.</p><p>Firstly, I had terrible trouble getting my <a href="https://smile.amazon.co.uk/dp/B00UNA1O3Y/ref=pe_3187911_189395841_TE_dp_1">Seagate Expansion 2 TB USB 3.0 Desktop 3.5 Inch External Hard Drive</a> to work correctly. Endless permission issues, problems with Samba, you name it. </p><p><strong>The key to solving these issues was to install the <a href="https://www.tuxera.com/community/open-source-ntfs-3g/">NTFS-3G driver</a> rather than using the standard NTFS driver when mounting the drive.</strong> I’ll cover that as I go in the steps described below.</p><h2>Step 1</h2><p>I started with the Raspberry Pi shut down and simply attached the drive to a vacant USB port on the Pi. I then powered up the drive and then the Pi.</p><h2>Step 2</h2><p>SSH to the Raspberry Pi as usual. I then ran the following command to see what drives were now attached.</p><pre class="brush: bash">sudo blkid</pre><p><br></p><p><a href="https://lh3.googleusercontent.com/-BEtakg9M5sg/WzaF-DH3J4I/AAAAAAAADqw/MbBVOFUAO0w2DhvMVB_r31aC__uPjrPZwCHMYCw/s1600-h/image%255B4%255D"><img width="821" height="466" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-k4rNUyM5Vyk/WzaF-tqr-dI/AAAAAAAADq0/p2zayF-1IeY9IXEZ1CCxQ2ggvb2DZ9SuACHMYCw/image_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>I looked for the new Seagate drive, which in this case was /dev/sda2. 
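The UUID can be captured directly rather than copied by eye: blkid itself supports `blkid -s UUID -o value /dev/sda2`. As an illustration, the sed sketch below pulls the UUID out of an already-captured blkid line (the sample line is invented from this post's values):

```shell
# Extract the UUID field from a line of blkid output.
# (blkid -s UUID -o value <device> does this directly; this sed
# version is for parsing output you have already captured.)
uuid_of() {
  sed -n 's/.*[[:space:]]UUID="\([^"]*\)".*/\1/p'
}

printf '/dev/sda2: LABEL="Seagate" UUID="FC82A10F82A0D006" TYPE="ntfs"\n' | uuid_of
# prints: FC82A10F82A0D006
```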
I made a note of the information, especially the UUID, which I used later.</p><h2>Step 3</h2><p>So, I’m skipping all the trial and error here, but the next significant thing to do is install the NTFS-3G driver using apt-get.</p><pre class="brush: bash">sudo apt-get install ntfs-3g</pre><p><br></p><p><a href="https://lh3.googleusercontent.com/-JeXGIOU27lo/WzaF_KRG4_I/AAAAAAAADq4/GR1baMSeM9M0u4tyfXwrCxjbKfMvLCRbgCHMYCw/s1600-h/image%255B9%255D"><img width="1001" height="466" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-PLpBEJHiocg/WzaF_4yP5fI/AAAAAAAADq8/dRBeOWxeBgcXvEor5kyEWSkV8Zg_VtemQCHMYCw/image_thumb%255B5%255D?imgmax=800" border="0"></a></p><h2>Step 4</h2><p>Time to mount the drive on the file system. I chose to mount the drive under /media rather than /mnt or any other location. So, I created a folder specifically for the drive (<font face="Courier New">/media/seagateHDD</font>) then mounted the drive to that folder.</p><pre class="brush: bash">cd /media<br>mkdir seagateHDD<br>sudo mount /dev/sda2 /media/seagateHDD/ -t ntfs-3g</pre><p> <strong>NB:</strong> Note the use of the <font face="Courier New">-t ntfs-3g</font> option.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhImq_qcwl8hhqb_hEcQDxXP5AdaeEEwuGqk1nkMSNHtSla5YHF2KJEddq-Hwqzpi1VuOFPUM1oQAl5Le04mOqDcJb6lTJcdnYW0PpvuKtV7StXm-wSP6qm8b7ZtpEQ0DIFui-_CF5E2kw/s1600-h/image%255B14%255D"><img width="637" height="311" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-86_o-ZRnKIM/WzaGA_p9xkI/AAAAAAAADrE/SvIKlR8Iy1UGij2j49vQympH-RVKsHeuwCHMYCw/image_thumb%255B8%255D?imgmax=800" border="0"></a></p><p>This proved the drive could be mounted and that it worked. 
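Whether a path really is an active mount point can also be checked in a script; a sketch that looks the path up in /proc/mounts (Linux-specific; the mount point is the one used in this post):

```shell
# Succeed if the given directory appears as a mount point in /proc/mounts.
is_mounted() {
  grep -qs "[[:space:]]$1[[:space:]]" /proc/mounts
}

if is_mounted /media/seagateHDD; then
  echo "/media/seagateHDD is mounted"
else
  echo "/media/seagateHDD is not mounted"
fi
```

The util-linux `mountpoint -q <dir>` command performs much the same check where it is available.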
As you can see, permissions are wide open.</p><h2>Step 5</h2><p>Now we need to set up the system to reconnect the drive at start-up. For this I modified the fstab file.</p><pre class="brush: bash">sudo nano /etc/fstab</pre><p><br></p><p><a href="https://lh3.googleusercontent.com/-Pd9F1Q-5EqA/WzaLTxgsRzI/AAAAAAAADsE/GGcLjw5pIXMlK6tEG4z6cH903D5L-Cg_wCHMYCw/s1600-h/SNAGHTMLdb69cc%255B8%255D"><img width="668" height="157" title="SNAGHTMLdb69cc" style="margin: 0px; border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLdb69cc" src="https://lh3.googleusercontent.com/-bAuW_cEjIOw/WzaLUbGJXfI/AAAAAAAADsI/wMrBZ1YPcEo37CTKbOpmKboRI4DM8w84wCHMYCw/SNAGHTMLdb69cc_thumb%255B3%255D?imgmax=800" border="0"></a> </p><p>And added the following line. Note the use of the UUID rather than /dev/sda2. This helps to ensure the same drive gets reattached even if the device name changes. </p><pre class="brush: bash">UUID=FC82A10F82A0D006 /media/seagateHDD ntfs-3g defaults 0 0</pre><p><br></p><p><a href="https://lh3.googleusercontent.com/-4jkr2KjDOBY/WzaGCbbwd5I/AAAAAAAADrQ/X7ONvnjqiisxfGHoYc5bKCMeBhapIrIcgCHMYCw/s1600-h/image%255B19%255D"><img width="851" height="286" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEge-7i5k07qYLrTrydsVV701eC0fG5zpJWpJ_YuyY1bV4B4wr8hl6I88s9d3hvyJ_4Rhvx5YBC-690-mCZdZXdnik_h70pUk8sDCqzQkKd8-ofm_2wg_lrkcvkTu_jo7M687rXvAxH4Y1I/?imgmax=800" border="0"></a></p><h2>Step 6</h2><p>Time to install Samba. 
Firstly I installed Samba using apt-get.</p><pre class="brush: bash">sudo apt-get install samba samba-common-bin</pre><p><br></p><p><a href="https://lh3.googleusercontent.com/-WFAZH6jfvtk/WzaGD9QHhhI/AAAAAAAADrY/gHrZ5fwR5MQC-9fCGMCc8rtJ-1dccz7_gCHMYCw/s1600-h/image%255B24%255D"><img width="821" height="466" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-5AccVrWQr-Y/WzaGEczNVWI/AAAAAAAADrc/tD-0G_mw8x0JX2qSYL20_dNFfFlQ7SzIQCHMYCw/image_thumb%255B14%255D?imgmax=800" border="0"></a></p><p>When that was done I edited the Samba configuration file.</p><pre class="brush: bash">sudo nano /etc/samba/smb.conf</pre><p>And added the following section.</p><pre class="brush: text">[media]<br> writeable = yes<br> public = yes<br> directory mode = 0777<br> path = /media/seagateHDD/Media<br> comment = Pi shared media folder<br> create mode = 0777</pre><p>Note that there was an existing folder called Media on the drive. I chose to make that folder accessible via Samba.</p><p><a href="https://lh3.googleusercontent.com/-DOYeuzh7DYw/WzaGFOh9alI/AAAAAAAADrg/9mbxgDOAv6UeXjmqw3qbvapGG8NyCbycACHMYCw/s1600-h/image%255B29%255D"><img width="1041" height="610" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1z5nYyGQVByWsk0-NAIc2VhswD4FO3oodw1KwaJUzwjEmoGOwRYdVWGtuw1lUUfbKndzN70KFuS1jK1WmSQxPW_EwbefSeUeWxqwn8t9W4HV1yHR0agV5guRC-cAM39RehekaOGFVugQ/?imgmax=800" border="0"></a></p><p>Then a quick restart of Samba to read the new configuration.</p><pre class="brush: bash">sudo /etc/init.d/samba restart</pre><h2>Step 7</h2><p>Test from Windows. 
I just added a Media Location mapped to my Raspberry Pi’s IP address and the media share and that was it!</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-6959987993219542712018-06-26T20:30:00.001+01:002018-06-26T20:35:20.556+01:00Quick headless setup of a Raspberry Pi 3<p>Here are the steps taken to get a Raspberry Pi 3 B+ up and running on my home network, headless – no monitor etc. attached.</p><h2>Step 1</h2><p>Follow the basic <a href="https://www.raspberrypi.org/documentation/installation/installing-images/README.md">installation guide</a> from raspberrypi.org to flash a micro SD card. I used the Raspbian Stretch Lite image and Etcher to flash the image onto the SD card.</p><p><a href="https://lh3.googleusercontent.com/-8KC_JqSHbk8/WzKVdeXRU7I/AAAAAAAADqU/UZ4xswjrPV44gGq9gk-7wQa_Ul5BeXWfwCHMYCw/s1600-h/image%255B14%255D"><img width="640" height="325" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-Vfl7YE7rm6Q/WzKVd-imq_I/AAAAAAAADqY/d8pst1AQ5dQQ-0S1Z9fZ9AGjbZRH-MYdQCHMYCw/image_thumb%255B8%255D?imgmax=800" border="0"></a></p><h2>Step 2 </h2><p>Create a file called ssh (no file extension) in the root of the newly created boot SD card. This enables SSH when the Raspberry Pi starts up.</p><p>The file doesn’t need any contents. Just the presence of the file enables SSH connections to the Pi.</p><h2>Step 3</h2><p>Put the SD card in the Raspberry Pi, connect it to the network via ethernet and power it up.</p><h2>Step 4</h2><p>Access your router’s management console and find the Raspberry Pi as a connected device. 
Note down the IP address.</p><p>If you can, use DHCP management tools to reserve the IP address so it won’t change (this makes it easier to reconnect to the Pi if you have to bounce your router).</p><h2>Step 5</h2><p>Use <a href="https://www.putty.org/">PuTTY</a> or a similar tool to SSH on to the Pi using the IP address from Step 4. Log in as the ‘pi’ user (default password is ‘raspberry’ with no quotes).</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk-eTEHDT3LfnUi_SeFXvRMUHWT2LleH-zaX-QuLC05hV3ufAYFZ2ZiFBPxjMFjo9EWeDUWwKp_-QaQ-0AI_wKgaiugv6DgAVNndQxkjNwPDJYqUmRDMdcxuu1IFynDo0P2x7qusfoKXg/s1600-h/image%255B4%255D"><img width="717" height="466" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-xJtKR-HoVjg/WzKUXud1-hI/AAAAAAAADp4/QsomnJYL-XA2FyvfAuomGqFpwDQZkSj4QCHMYCw/image_thumb%255B2%255D?imgmax=800" border="0"></a></p><h2>Step 6</h2><p>Run the following command:</p><pre class="brush: bash">sudo raspi-config</pre><p>This fires up the Raspberry Pi configuration tool. Make any changes you want to (e.g. enabling wi-fi or changing the host name). Change the default password if nothing else.</p><p><a href="https://lh3.googleusercontent.com/--teolf_eBdo/WzKUYA6ntCI/AAAAAAAADp8/iOwfXVdkZzAhzxQxBPPsxyJf7vtNhEAfQCHMYCw/s1600-h/image%255B9%255D"><img width="717" height="466" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-f-Uvwg0B34E/WzKUYpUaIQI/AAAAAAAADqA/EYe8XwE0oGkpAg8umG_VBtcZ87SLeoQeACHMYCw/image_thumb%255B5%255D?imgmax=800" border="0"></a></p><h2>Step 7</h2><p>Run the following command:</p><pre class="brush: bash">sudo apt-get update</pre><p>And then this one:</p><pre class="brush: bash">sudo apt-get upgrade</pre><p>You’re done. 
Raspberry Pi is up-and-running.</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-21837337951242404052018-03-01T11:54:00.001+00:002018-03-01T11:55:50.182+00:00Suspicious Windows 10 Printer Update?<p>I’ve been seeing this in my Windows 10 update after receiving a notification that it failed to install:</p><blockquote><p>Canon - Printer - 4/21/2000 12:00:00 AM - 10.0.17046.1000<br><strong>Status:</strong> Awaiting install</p></blockquote><p><br></p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfR-RJuiUFg6rUVXICwiKYQiMPap8_xAE-X9CAmh5TJCVN9rO2nqlPWsfwkqYht8a9vxcVmUdgbA9hKKlSSo3Yn1X3VFJyc-xWRK7jglBxPcBcGl-wFQz7Mk7S1tsdOILz99eV7mofzyk/s1600-h/SNAGHTML49d7fd%255B5%255D"><img width="790" height="537" title="SNAGHTML49d7fd" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML49d7fd" src="https://lh3.googleusercontent.com/-aN3KvGJTR_o/WpfqCKzKp0I/AAAAAAAADis/SjNzy2MSvwsn00r79Wj-pwNiShpqA9MKgCHMYCw/SNAGHTML49d7fd_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>It seems I’m not alone in spotting this rather odd and somewhat suspicious issue:</p><p>See:</p><ul><li><a href="http://searchenterprisedesktop.techtarget.com/blog/Windows-Enterprise-Desktop/Bogus-Win10-Printer-Update-Alerts">Bogus Win10 Printer Update Alerts</a></li><li><a title="https://www.reddit.com/r/techsupport/comments/7uqj49/cant_install_update_canonprinter_update_from/" href="https://www.reddit.com/r/techsupport/comments/7uqj49/cant_install_update_canonprinter_update_from/">https://www.reddit.com/r/techsupport/comments/7uqj49/cant_install_update_canonprinter_update_from/</a></li><li><a title="https://www.tenforums.com/drivers-hardware/97218-cant-get-rid-printer-6-21-2006-12-00-00-am-10-0-15063-0-update.html" 
href="https://www.tenforums.com/drivers-hardware/97218-cant-get-rid-printer-6-21-2006-12-00-00-am-10-0-15063-0-update.html">https://www.tenforums.com/drivers-hardware/97218-cant-get-rid-printer-6-21-2006-12-00-00-am-10-0-15063-0-update.html</a></li></ul><p><br></p><p>For now I am trying the “Show or hide updates” troubleshooter package from Microsoft which you can find here:</p><ul><li><a href="https://support.microsoft.com/en-gb/help/3183922/how-to-temporarily-prevent-a-windows-update-from-reinstalling-in-windo">How to temporarily prevent a Windows Update from reinstalling in Windows 10</a></li></ul>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-22648880102189363742017-11-27T14:24:00.001+00:002017-11-27T14:31:40.180+00:00Elements of the Archimate 3.0 Application Layer<p>This post is a basic <em>aide-memoire</em> for me to remember the characteristics of the Archimate Application Layer model elements. For a detailed description refer to the main <a href="http://pubs.opengroup.org/architecture/archimate3-doc/chap09.html">Archimate 3.0 documentation</a>.</p><p>The model elements are divided into 3 categories:</p><ul><li>Active Structure</li><li>Passive Structure</li><li>Behaviour</li></ul><p><br></p><h2>Active Structure</h2><table border="1" cellspacing="0" cellpadding="0"><tbody><tr><td width="94" valign="top"><p><strong>Element</strong></p></td><td width="212" valign="top"><p><strong>Description</strong></p></td><td width="147" valign="top"><p><strong>Notes</strong></p></td><td width="148" valign="top"><p><strong>Notation</strong></p></td></tr><tr><td width="94" valign="top"><p><strong>Component</strong></p></td><td width="212" valign="top"><p>Encapsulation of application functionality aligned to implementation structure<p>Modular and replaceable<p>Encapsulates behaviour and data<p>Exposes services and makes them available through interfaces</p></td><td width="147" 
valign="top"><p>A self-contained unit<p>Independently deployable, re-usable, and replaceable<p>Performs one or more application <i>functions</i><p>Functionality is only accessible through application <i>interfaces</i></p></td><td width="148" valign="top"><a href="https://lh3.googleusercontent.com/-DnpwUvf7Sls/Whwhn-4_m-I/AAAAAAAADf4/lPE3PFd3VTET1dm0uP0KHxkyUmnslIpQwCHMYCw/s1600-h/image%255B43%255D"><img width="156" height="197" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh97ZkrTOmKMysas9aIAzzQajQTpPuUUdL-jaK3Ni7y6rRxTLvUqAyccdOI08YkgOGcE2mh6kmNUAeyJYJRTKSuEW1w8ZiLZYv8LuPFQUKowflnkjXlPU4bX3lW38PGj6j9_ZMVFI8JWt8/?imgmax=800" border="0"></a></td></tr><tr><td width="94" valign="top"><p><strong>Collaboration</strong></p></td><td width="212" valign="top"><p>An aggregate of two or more application components that work together to perform collective application behaviour</p></td><td width="147" valign="top"><p>Specifies which components cooperate to perform some task<p>A logical or temporary collaboration of application components<p>Does not exist as a separate entity</p></td><td width="148" valign="top"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjk05-m8JAPoAeVTuxU-irfVoRd-IYE03gUz8YpQK2WBPuEUuqbzYaJvbjZlSxDjLIinLgtlJmHj84vq38h04g5gs5EP-RBhhdp7k6nNy8NydshbUYbefbN6aTTu1aWnSp-ieGz4vv3Rz0/s1600-h/image%255B44%255D"><img width="168" height="188" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-RGRi7c8nt3c/WhwhqGBvM2I/AAAAAAAADgE/XIZQjvlLof012ci6gteicEetJ9d6HOMqACHMYCw/image_thumb%255B17%255D?imgmax=800" border="0"></a></td></tr><tr><td width="94" valign="top"><p><strong>Interface</strong></p></td><td 
width="212" valign="top"><p>A point of access where application services are made available</p></td><td width="147" valign="top"><p>How the functionality of a component can be accessed<p>Exposes application services<p>The same interface may expose multiple services</p></td><td width="148" valign="top"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuv6HZkIsU7hi-QuASpT6ThWz1Zl_ihUzwpqkLVkIqm7snDGj-CMoaGPx5CKlNKZEVx4mcJJ_Zl1pOZSfA_KEYXh0Aop3cDPule_WcNE5Am4TNw-ELqTb_Qx807O7ziql3K87jyU9Gpc8/s1600-h/image%255B45%255D"><img width="160" height="198" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-37MEy491gOU/WhwhrafoZ-I/AAAAAAAADgM/PSfxRcsrwNAFwLLaPYpPr8z0HWWhbdspwCHMYCw/image_thumb%255B18%255D?imgmax=800" border="0"></a></td></tr></tbody></table><p><br></p><h2>Behaviour</h2><table border="1" cellspacing="0" cellpadding="0"><tbody><tr><td width="94" valign="top"><p><strong>Element</strong></p></td><td width="212" valign="top"><p><strong>Description</strong></p></td><td width="147" valign="top"><p><strong>Notes</strong></p></td><td width="148" valign="top"><p><strong>Notation</strong></p></td></tr><tr><td width="94" valign="top"><p><strong>Function</strong></p></td><td width="212" valign="top"><p>Automated behaviour performed by a component</p></td><td width="147" valign="top"><p>Describes internal behaviour of a component<p>Functions are exposed externally through one or more services<p>May access data objects</p></td><td width="148" valign="top"><a href="https://lh3.googleusercontent.com/-BVDJEwDFX5E/Whwhr3tir1I/AAAAAAAADgQ/CSI6nNJ9-xwM3ebF6Zg8nf4Mlr9CSZD5ACHMYCw/s1600-h/image%255B46%255D"><img width="168" height="193" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" 
src="https://lh3.googleusercontent.com/-XrFSUFZy4Sw/Whwhsob9ULI/AAAAAAAADgU/TTfbZcppvJIvrmiIRL-CGVRGL7BqIhc7ACHMYCw/image_thumb%255B19%255D?imgmax=800" border="0"></a></td></tr><tr><td width="94" valign="top"><p><strong>Interaction</strong></p></td><td width="212" valign="top"><p>A unit of collective application behaviour performed by two or more application components</p></td><td width="147" valign="top"><p>Collective behaviour performed by components that participate in a collaboration</p></td><td width="148" valign="top"><a href="https://lh3.googleusercontent.com/-yUZAstq3DDI/WhwhtAoOysI/AAAAAAAADgY/ET-pd7dyIMgvmxa1HX9wNi-6tIV4GRoZgCHMYCw/s1600-h/image%255B47%255D"><img width="162" height="203" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_uNXWOWVQKyt1e6XJM4nEmk1dA_T8yjUYsmy1j25Iu8PQ61NFg-cN1tJvCTBgm5eD8Mj3fKa-nARrheyguYhvjp0CqgHKeEgJHHQTZTjTVxlAomypbmdpIPg0tZj9gXVF-PlOoCJ_YZY/?imgmax=800" border="0"></a></td></tr><tr><td width="94" valign="top"><p><strong>Process</strong></p></td><td width="212" valign="top"><p>A sequence of application behaviours that achieves a specific outcome</p></td><td width="147" valign="top"><p>The internal behaviour performed by a component to realize a set of services<p>May realize application services<p>May access data objects<p>A component may perform the process</p></td><td width="148" valign="top"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKTzgFHa2XDmyvPzXSEyMnwj_VxiDW_rZ7CDwqZCdS9Ry136Pif5CPSt3DQt5RaAazV62KEL2Gqru7osq3951kOzKwl56tzRJc3krJzujnep3NtRnMfE5yWlukJiTYsyJErjShZd0QV2A/s1600-h/image%255B48%255D"><img width="163" height="201" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" 
src="https://lh3.googleusercontent.com/-3VyOGk3aCws/WhwhvFSTSWI/AAAAAAAADgk/HvOtiXrST681H0Y3LRxvAtwVo1Qnfv2lgCHMYCw/image_thumb%255B21%255D?imgmax=800" border="0"></a></td></tr><tr><td width="94" valign="top"><p><strong>Event</strong></p></td><td width="212" valign="top"><p>Denotes a state change</p></td><td width="147" valign="top"><p>Does not have a duration<p>May be internal or external<p>May have a time attribute</p></td><td width="148" valign="top"><a href="https://lh3.googleusercontent.com/--56fCeIj2vE/WhwhvpE43SI/AAAAAAAADgo/fdajmMlF9P4bJpV3SBWNdQ7wUyfRNa_3gCHMYCw/s1600-h/image%255B49%255D"><img width="157" height="188" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-iGhbchMsXg4/WhwhwAKGMRI/AAAAAAAADgs/uY93IT8Jmis62Ey-Zr2zED5wcPCdExy1ACHMYCw/image_thumb%255B22%255D?imgmax=800" border="0"></a></td></tr><tr><td width="94" valign="top"><p><strong>Service</strong></p></td><td width="212" valign="top"><p>An explicitly defined exposed application behaviour</p></td><td width="147" valign="top"><p>Functionality is exposed through interfaces<p>Realised by one or more functions<p>Provides a useful unit of behaviour</p></td><td width="148" valign="top"><a href="https://lh3.googleusercontent.com/-xNBvXxZ3aoU/WhwhwgEuM0I/AAAAAAAADgw/vQr0Xn3i2tYmKPjyXS4yV6gafsZXGdILwCHMYCw/s1600-h/image%255B50%255D"><img width="155" height="188" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-18lNJAAHV-c/WhwhxUQqueI/AAAAAAAADg0/NhEkZiT6HvwWqBEItCwBvsmWsQSXPrtSACHMYCw/image_thumb%255B23%255D?imgmax=800" border="0"></a></td></tr></tbody></table><p><br></p><h2>Passive Structure</h2><table border="1" cellspacing="0" cellpadding="0"><tbody><tr><td width="94" 
valign="top"><p><strong>Element</strong></p></td><td width="212" valign="top"><p><strong>Description</strong></p></td><td width="147" valign="top"><p><strong>Notes</strong></p></td><td width="148" valign="top"><p><strong>Notation</strong></p></td></tr><tr><td width="94" valign="top"><p><strong>Data</strong></p></td><td width="212" valign="top"><p>Data structured for automated processing</p></td><td width="147" valign="top"><p>A self-contained piece of information<p>Clear business meaning</p></td><td width="148" valign="top"><a href="https://lh3.googleusercontent.com/-iAdajFcUuR4/Whwhx3nlHAI/AAAAAAAADg4/7B6ig18sMHc3Ll_zWQHL14AYOtV5NiJVQCHMYCw/s1600-h/image%255B51%255D"><img width="150" height="113" title="image" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGnpOpmzZsBffjDR5WJ78Vc35qu4DSo1ZXez5bGuCNlBhs9URyHmhgNyOHA1nFhYA-K0RqZJHdzVPlidztzagaUEsLJemOUQc0SEz1PkvsaqsmBGh1lZqobNmP6_8M3QWxUrnpsX2Fwjk/?imgmax=800" border="0"></a></td></tr></tbody></table>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-11169784894986490352017-11-12T17:11:00.001+00:002017-11-12T17:11:44.531+00:00Speed up real-time rendering in DaVinci Resolve with Optimised Media<p>This post refers to <a href="https://www.blackmagicdesign.com/uk/products/davinciresolve/">DaVinci Resolve</a> 12.5.</p><p>Real-time rendering in DaVinci Resolve can be slow. One of the first things to try is optimising your media. As I understand it, Resolve will use optimised media – which is more efficient - as a proxy for the original format media in the timeline. 
</p><p>To optimise a media item you can right-click on it in the media gallery and select <em>Generate Optimised Media</em>.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXR6VGU6NFrwGE7DLLpf2b20fk4eK1xSYlmKbzxXwBczK7P5qLcgeepGiSbMixTQVWXeoEZehaOMsBBdDg9nmDzsEu3gMkE-3Lmj8dQliZqY8mCgMifJdjesX-htKM32TEt8sOUhskhEA/s1600-h/SNAGHTML613cfb16%255B5%255D"><img width="785" height="506" title="SNAGHTML613cfb16" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML613cfb16" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbIHMyUzcmCxlUxqILTCBKe1LOPKVRRk8Tyd1QU-wAkck0K5mONRwQQWQ7y4ZWSHHlSnzw3bE15RG3uNmHSzyOA8aIy5t4ojWOliPlStS_HJU7zNUj-6-F4qXMS2aosfXFEd2451uqdtA/?imgmax=800" border="0"></a></p><p>The output will end up in the same place as other cached items (see <a href="http://www.andyfrench.info/2017/11/speed-up-real-time-rendering-in-davinci.html">my previous post for details</a>).</p><p>Not surprisingly, generating optimised media can take quite a while and it appears that using this method you can only do one clip at a time.</p><p><a href="https://lh3.googleusercontent.com/-4zzGyUCmExw/WgiAzLbl09I/AAAAAAAADd8/9eysVoBVoygZP6VqxBMMAuJDAqk2T2WIACHMYCw/s1600-h/SNAGHTML614025e1%255B5%255D"><img width="642" height="370" title="SNAGHTML614025e1" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML614025e1" src="https://lh3.googleusercontent.com/-yvhdVKoOp1s/WgiAz69HZiI/AAAAAAAADeA/pPRTSQnWZWcqkPGUEUCB9L8Anx1tI38fACHMYCw/SNAGHTML614025e1_thumb%255B2%255D?imgmax=800" border="0"></a></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-76027687329693291762017-11-12T15:29:00.001+00:002017-11-12T17:12:24.591+00:00Speed up real-time rendering in DaVinci Resolve<p>This post refers to <a 
href="https://www.blackmagicdesign.com/uk/products/davinciresolve/">DaVinci Resolve</a> 12.5.</p><h2>Problem</h2><p>OK, I don’t have a system that’s set up for video editing. As a result, playback in DaVinci Resolve is slow once I’ve added more than a few clips to the timeline, especially if I’m layering clips.</p><p>How can we speed up DaVinci Resolve so we get smooth real-time playback in the editor?</p><h2>Solution</h2><p>A solution might be to use the render cache in DaVinci Resolve. You can activate the render cache via the <em>Playback > Render Cache</em> menu item. </p><p><a href="https://lh3.googleusercontent.com/-LzUxy9TXuaM/WghoqUGhfLI/AAAAAAAADck/J_164fODWmcgePWZe1-GmPAsZLG6CxNHwCHMYCw/s1600-h/SNAGHTML60f70665%255B5%255D"><img width="638" height="455" title="SNAGHTML60f70665" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML60f70665" src="https://lh3.googleusercontent.com/--CDyps198u8/WghorNKAs4I/AAAAAAAADco/DXiOZPMpBvgbgK0DdJkb81Zu5DwuMzK7QCHMYCw/SNAGHTML60f70665_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Note that the render cache has two modes: <em>Smart</em> and <em>User</em>. In short, using the <em>Smart</em> option allows Resolve to automatically cache items as it sees fit. In <em>User</em> mode clips are only cached when you indicate you want them cached. 
To do so, right-click on the clip and choose Render Cache Clip Source > On from the pop-up menu.</p><p><a href="https://lh3.googleusercontent.com/-A5WEinCfuOk/WghosCwHAeI/AAAAAAAADcs/K_7HrbBxGlk7TFgonbpYJPXgGeuwTF64ACHMYCw/s1600-h/SNAGHTML60fe0aa9%255B5%255D"><img width="816" height="530" title="SNAGHTML60fe0aa9" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML60fe0aa9" src="https://lh3.googleusercontent.com/-IhbCJRleHy8/WghotymUYMI/AAAAAAAADcw/pdT4MJv3Xz4NknzgtGvNoDN9ZrW1oIbqgCHMYCw/SNAGHTML60fe0aa9_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>When you first add an item to be cached a red line will appear at the top of the timeline above the clip. Once a clip is cached you’ll see the line turn blue.</p><p><a href="https://lh3.googleusercontent.com/-bYsnnOi819k/Wghouw9l2OI/AAAAAAAADc0/BidmGIZNpwgmzZpanid_zwsdvNCTWYa7gCHMYCw/s1600-h/SNAGHTML61013dd0%255B5%255D"><img width="874" height="405" title="SNAGHTML61013dd0" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML61013dd0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3dpk4YKglchH8kBBMQxaDU4AIROh7gcpD9x1Cr5UagvTR854DJI0O_ZAP2BDR7YlPL_d5ihHeAeGQJAn9ZjjMYRCV1knAgTmTAJKVKvf_vCTbd2h79FmnnChwxDV16RBV8hcW8HSK9l0/?imgmax=800" border="0"></a></p><p><strong>Tip:</strong> If you use the Smart mode and you already have a number of clips added to the timeline it might take a while to add to the cache. Your system might be slow for a while as the cache is built.</p><h2>Changing the render cache location </h2><p>Firstly, add the path you want to use to the Media Storage section of <em>DaVinci Resolve > Preferences…</em> screen. 
I had to restart Resolve after adding a new location or the next step failed.</p><p><a href="https://lh3.googleusercontent.com/--VZ3NPEwOSI/WghxkRpJqeI/AAAAAAAADdM/_N4uuRJUM8EpAqikcO-oo83B2ireJNRSwCHMYCw/s1600-h/SNAGHTML61242c80%255B5%255D"><img width="378" height="239" title="SNAGHTML61242c80" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML61242c80" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_vGHPw2C0JmedV_82FPqKU26yYG2g14DgX6EhmbKTvJyNqFypMfZrnBIgW_ZJoGna1Qc5PWOK8aCx_G5pl5Ze3Vwa1V4mLu5gkc1rRHoMQXb7gYXF1WworM8rd2bHLxQrJ0DtdIvhyphenhyphenGU/?imgmax=800" border="0"></a></p><p><a href="https://lh3.googleusercontent.com/-xWHXuG8cCJ0/WghxloquR1I/AAAAAAAADdU/lfjmeSPk_kMD9_PVbuojXExGGBFq4_jhgCHMYCw/s1600-h/image%255B4%255D"><img width="757" height="610" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-ZD6Xlv2EyvE/WghxmfZuC_I/AAAAAAAADdY/DYd0kEHnsB4AxTA5-fpbAmrNKDYAyGS0gCHMYCw/image_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Secondly, open <em>File > Project settings…</em> and go to General Options. 
The cache file location is at the bottom of the screen.</p><p><a href="https://lh3.googleusercontent.com/-YtwTflfJ4f8/WghxnQHgHXI/AAAAAAAADdc/D53WsVnHH10Py99PJBT-xZGPGRlGPLbGACHMYCw/s1600-h/image%255B9%255D"><img width="850" height="810" title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="image" src="https://lh3.googleusercontent.com/-pemrgbYQOj0/WghxoXfuwGI/AAAAAAAADdg/tFfDgH1wkJwDtcKjRn5oj9vqskOAzaDmQCHMYCw/image_thumb%255B5%255D?imgmax=800" border="0"></a></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-37022365547988180572017-08-13T14:22:00.001+01:002017-08-13T14:22:26.063+01:00Quick RabbitMq using Docker<p>I needed a quick <a href="https://www.rabbitmq.com/">RabbitMq</a> installation so I could play with <a href="http://masstransit-project.com/">MassTransit</a>, the free open-source .Net message bus framework. <a href="https://www.docker.com/">Docker</a> to the rescue. I was using the <a href="https://www.docker.com/products/docker-toolbox">Docker Toolbox</a> on Windows 10 Home.</p><p>First things first - RabbitMq is available on the <a href="https://store.docker.com/images/rabbitmq">Docker Store</a> as a Docker image called rabbitmq. The documentation is reasonable and I decided I wanted RabbitMq installed with the management plugin. For this test I decided to leave the default RabbitMq username (guest) and password in place. I also elected to expose the default ports for the management plugin (15672) and the standard RabbitMq port (5672) to the host.</p><p>Having scanned the documentation, the following Docker run command would seem to be in order:</p><pre class="brush: text">docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management</pre><p>Note the two -p command-line arguments exposing the ports to the host from the Docker container. 
So, time to crack open the Docker Quickstart Terminal and run the command.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1KJXmW9C8PKQCV5RLn2YhRIL_z2Oc5Hci-8_4q6UOw75NlD0BJYug-ZclPz8gX1Eysa_qWJu41Y9S5GVpMoBs5WMORqQ4rN08OtlpxCgiuH1DEQp6ow5Xv80R6RV4PI8M-46pIpYEdhs/s1600-h/SNAGHTMLf7f8e94%255B5%255D"><img width="704" height="210" title="SNAGHTMLf7f8e94" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLf7f8e94" src="https://lh3.googleusercontent.com/-9huAyQf_REA/WZBSaai7M8I/AAAAAAAADaM/lCdQHSGYxkUqhD4G8Er0QvwMWlU_6pmLwCHMYCw/SNAGHTMLf7f8e94_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>With the Docker container up-and-running I can now go to my local machine and access the RabbitMq management UI running on port 15672.</p><p><a href="https://lh3.googleusercontent.com/-5BdS-mEzSiI/WZBSbI0YvJI/AAAAAAAADaQ/-BFnHuz9J28OECtCG-SktG3WmKZ60XMggCHMYCw/s1600-h/SNAGHTMLf80a65d%255B7%255D"><img width="1045" height="878" title="SNAGHTMLf80a65d" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLf80a65d" src="https://lh3.googleusercontent.com/-gXuQJFJXy_k/WZBScIMIttI/AAAAAAAADaU/fx0kxnhtaGQ0UNBbRmqBrd2EgEJSeLowwCHMYCw/SNAGHTMLf80a65d_thumb%255B4%255D?imgmax=800" border="0"></a></p><p>Cool! Looks like we’ve got RabbitMq running in a container.</p><h2>A quick test application</h2><p>I decided to run a quick test using a console application to check everything was working. 
Firstly, I set up a virtual host for the test in the management UI remembering to add the guest user to the virtual host.</p><p><a href="https://lh3.googleusercontent.com/-YKYoPJ3iyks/WZBSc2OUMeI/AAAAAAAADaY/aSd74v-JKc0nz7JJR_OS-Z3sRek-2wvfgCHMYCw/s1600-h/SNAGHTMLf83a6cc%255B7%255D"><img width="794" height="549" title="SNAGHTMLf83a6cc" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLf83a6cc" src="https://lh3.googleusercontent.com/-uya3k2ZO-68/WZBSdkTm0dI/AAAAAAAADac/jI2ZOvW0AEgBqagSt3W9iZ2UAYPe0pFHwCHMYCw/SNAGHTMLf83a6cc_thumb%255B4%255D?imgmax=800" border="0"></a></p><p>Then, using the RabbitMq .Net client I created the ‘Hello World’ application.</p>
<pre class="brush: csharp;">using System;
using System.Text;
using RabbitMQ.Client;

namespace RabbitMqTest
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                ConnectionFactory factory = new ConnectionFactory();
                factory.Uri = new Uri("amqp://guest:guest@192.168.99.100:5672/console-test");

                Console.WriteLine("Connecting...");

                // Dispose the connection and channel cleanly when we're done
                using (IConnection conn = factory.CreateConnection())
                using (IModel model = conn.CreateModel())
                {
                    Console.WriteLine("Connected.");

                    var exchangeName = "console-test-exchange";
                    var queueName = "console-test-queue";
                    var consoleTestRoutingKey = "console-test-routing-key";

                    // Declare the exchange and queue, and bind them with the routing key
                    model.ExchangeDeclare(exchangeName, ExchangeType.Direct);
                    model.QueueDeclare(queueName, false, false, false, null);
                    model.QueueBind(queueName, exchangeName, consoleTestRoutingKey, null);

                    // Publish a single message to the exchange
                    byte[] messageBodyBytes = Encoding.UTF8.GetBytes("Hello, world!");
                    model.BasicPublish(exchangeName, consoleTestRoutingKey, null, messageBodyBytes);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
        }
    }
}</pre>
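<p>The same sanity check can also be scripted against the RabbitMq management API rather than the UI. The sketch below is my own addition, not part of the original test: it builds the management API URL for pulling a message off the queue, assuming the container from earlier is reachable on the Docker Toolbox VM address 192.168.99.100 with the default guest credentials.</p>

```shell
#!/bin/bash
# Sketch: fetch a message from the queue via the RabbitMq management API.
# The host, vhost, and queue names mirror the console test above; the
# 192.168.99.100 address is the Docker Toolbox VM used in this post.
RABBIT_HOST="${RABBIT_HOST:-192.168.99.100}"
VHOST="console-test"
QUEUE="console-test-queue"

get_message_url() {
  # The management API exposes queued messages at /api/queues/{vhost}/{queue}/get
  echo "http://${RABBIT_HOST}:15672/api/queues/${VHOST}/${QUEUE}/get"
}

# Against a live broker you would POST to that URL, e.g.:
# curl -s -u guest:guest -H 'content-type: application/json' \
#   -d '{"count":1,"ackmode":"ack_requeue_true","encoding":"auto"}' \
#   "$(get_message_url)"
get_message_url
```

<p>With ack_requeue_true the message is put back on the queue after being read, so the check doesn’t consume it.</p>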
<p>I ran the application and headed back into the RabbitMq management UI to check the results. Firstly, was the exchange created?</p><p><a href="https://lh3.googleusercontent.com/-m5wiYVb_7YM/WZBSefT-UrI/AAAAAAAADag/y7A3rXoZdbc-PYulYU6zsGj6P6xZFkboQCHMYCw/s1600-h/SNAGHTMLf8b041c%255B6%255D"><img width="763" height="855" title="SNAGHTMLf8b041c" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLf8b041c" src="https://lh3.googleusercontent.com/-I1pl-tmF13Y/WZBSfcLHUEI/AAAAAAAADak/08JHXvbchvM79vR-Fq-dV5e_wks1SKDSwCHMYCw/SNAGHTMLf8b041c_thumb%255B3%255D?imgmax=800" border="0"></a></p><p>It was. And the queue?</p><p><a href="https://lh3.googleusercontent.com/-Uh2bdBuUf2k/WZBSgDGQYNI/AAAAAAAADao/ckF9a83cDw41AKAXS_LZRJbmoX_uO24BQCHMYCw/s1600-h/SNAGHTMLf8c5e0f%255B5%255D"><img width="899" height="765" title="SNAGHTMLf8c5e0f" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLf8c5e0f" src="https://lh3.googleusercontent.com/-w9WALD7K5RM/WZBSgmbczkI/AAAAAAAADas/Uc5Avjn3WdAtjMynkDcGI9H9YXhLbw4EwCHMYCw/SNAGHTMLf8c5e0f_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Success again. You can see there’s one message ready too. By drilling in to the queue you can get messages in the management UI. Let’s see what we got.</p><p><a href="https://lh3.googleusercontent.com/-ViznqihlTGA/WZBShfprwcI/AAAAAAAADaw/4Bnj528FJ8EX7N3Bg8Y_OYvV6Pqlgv7AACHMYCw/s1600-h/SNAGHTMLf8dd03d%255B5%255D"><img width="571" height="566" title="SNAGHTMLf8dd03d" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLf8dd03d" src="https://lh3.googleusercontent.com/-yNeoJctYAGs/WZBSiENtjII/AAAAAAAADa0/Spg64YvzjJAs_HioopWo8B9w26DpDo2jQCHMYCw/SNAGHTMLf8dd03d_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Bingo! 
So, all is well with RabbitMq.</p><h2>The Sample-ShoppingWeb application</h2><p>MassTransit has a sample application called Sample-ShoppingWeb which you can get from <a href="https://github.com/MassTransit/Sample-ShoppingWeb">GitHub</a>.</p><p>Firstly, I created a new virtual host in RabbitMq and added the guest user to it. I then updated the RabbitMqHost setting in the App.config and Web.config files of the TrackingService and Shopping.Web projects respectively.</p><p><a href="https://lh3.googleusercontent.com/-1BZ2gK4Qy3g/WZBSi9ZH_hI/AAAAAAAADa4/mm2eomN8F_YehPFyazpZ3qUoISjvBmo0wCHMYCw/s1600-h/SNAGHTML1002618c%255B5%255D"><img width="752" height="182" title="SNAGHTML1002618c" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML1002618c" src="https://lh3.googleusercontent.com/-NIRo8H_ZgbQ/WZBSjRa7yEI/AAAAAAAADa8/KrrJxettZRMGLizuaJ4-gogsv4PLvZeYACHMYCw/SNAGHTML1002618c_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>I ran the example, added a few items to the cart using the Shopping.Web MVC application, and watched as the TrackingService picked up the items via RabbitMq. An examination of the RabbitMq management UI showed a bunch of new exchanges and two new queues.</p><p>Done. </p><p><a href="https://lh3.googleusercontent.com/-imWlLuf0woY/WZBSkD68lOI/AAAAAAAADbA/49B8Nv_Gubol7x1OgCkFMphvds9wb7GtgCHMYCw/s1600-h/SNAGHTMLb7b63665"><br></a></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-24147656707646975152017-08-13T11:44:00.001+01:002017-08-19T10:39:50.168+01:00Docker recipes<p>This post is a quick aide-mémoire for basic command-line Docker operations. It’s well worth reading the <a href="https://docs.docker.com/">Docker documentation</a> for the real deal. 
I’ve been running Docker on Windows 10 Home – yes, Home – so I’ve had to use <a href="https://www.docker.com/products/docker-toolbox">Docker Toolbox</a>. I’ve run the commands listed here using the Docker Quickstart Terminal that comes with the Toolbox.</p><h2>List containers</h2><p>To list running containers run the following command (add the -a flag to include stopped containers too):</p>
<pre class="brush: plain">docker container ls</pre>
<p><a href="https://lh3.googleusercontent.com/-wjfF7X7cWEI/WZAtgJt9yHI/AAAAAAAADZs/w_HV8gcnoi0rFyn528dwNC2pwVFecgezwCHMYCw/s1600-h/SNAGHTMLf550b66%255B5%255D"><img width="704" height="268" title="SNAGHTMLf550b66" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLf550b66" src="https://lh3.googleusercontent.com/-FkET1MXO2tg/WZAtg6BQgkI/AAAAAAAADZw/NyGO9CbgFUomrHN6T5ypHJyKposCAMVrwCHMYCw/SNAGHTMLf550b66_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>See the <a href="https://docs.docker.com/engine/reference/commandline/container_ls/">docker container ls documentation</a>. </p><h2>Run a Bash shell on a container</h2><p>To get access to a container using a Bash shell run the following command:</p>
<pre class="brush: plain;">docker exec -it &lt;container-name-here&gt; bash</pre>
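<p>One wrinkle: slim images often ship without bash, in which case the command above fails. A small helper – my own sketch, not from the Docker docs – falls back to plain sh (Docker is only needed when the function is actually called):</p>

```shell
#!/bin/bash
# Sketch: open an interactive shell in a named container, preferring bash
# but falling back to sh for slim images that don't include bash.
dsh() {
  local name="$1"
  docker exec -it "$name" bash 2>/dev/null || docker exec -it "$name" sh
}
```

<p>Defining the function is harmless; nothing touches the Docker daemon until you run, say, dsh some-rabbit.</p>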
<p><a href="https://lh3.googleusercontent.com/-IkKX_Vh-rww/WZAthqf-iGI/AAAAAAAADZ0/fJ9RthT8oQAVA7Va9wAEB07uB1RAmBIMACHMYCw/s1600-h/SNAGHTMLf58ed4c%255B5%255D"><img width="704" height="185" title="SNAGHTMLf58ed4c" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTMLf58ed4c" src="https://lh3.googleusercontent.com/-IeIb_aTFzUM/WZAtiQ9QOJI/AAAAAAAADZ4/gOjDYSW8laYRiGNV6SmE7xMpvL-_msbOACHMYCw/SNAGHTMLf58ed4c_thumb%255B2%255D?imgmax=800" border="0"></a></p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-29583662802325409502017-07-21T11:59:00.001+01:002017-07-21T11:59:05.831+01:00Setting up SSH for BitBucket on Windows<p>Back to basics for me today. I’m rebuilding a machine and want to set up SSH to access my <a href="https://bitbucket.org">BitBucket</a> account (I use BitBucket for my Git repositories). The new machine already has Git installed. I simply used <a href="https://chocolatey.org/">Chocolatey</a> to install <a href="https://chocolatey.org/packages/git">Git</a>.</p><p>You can skip to <a href="https://confluence.atlassian.com/bitbucket/set-up-ssh-for-git-728138079.html">full instructions in the BitBucket help</a> if you like. </p><h2>Step 1 – Check the .ssh directory</h2><p>The first step is to check that you’ve got a folder called .ssh in your home directory. 
If it’s missing you need to create it.</p><p><a href="https://lh3.googleusercontent.com/-dk3A6lc5zgA/WXHeVbC3j-I/AAAAAAAADWg/FolOkxAXlAcT7cSTvdabvw7HERY8vlPNwCHMYCw/s1600-h/SNAGHTML27c673c3%255B6%255D"><img width="615" height="496" title="SNAGHTML27c673c3" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML27c673c3" src="https://lh3.googleusercontent.com/-BIjU1-7Tpsk/WXHeWb8RMHI/AAAAAAAADWk/qE30_odcgX8SXbXbTOeny8DfH2MrZOehwCHMYCw/SNAGHTML27c673c3_thumb%255B3%255D?imgmax=800" border="0"></a></p><h2>Step 2 – Create the default identity</h2><p>Run ssh-keygen to create the key. If this is a fresh install there won’t be a default key, so you can just hit Enter to accept the default name, or enter a new one if you want. Enter the passphrase when prompted.</p><p><a href="https://lh3.googleusercontent.com/-Yp7bc6Ybf24/WXHeWpR_foI/AAAAAAAADWo/k21hUH0uns0Qxy41p3ODvdeEldGl-6dLQCHMYCw/s1600-h/SNAGHTML27c76c2d%255B6%255D"><img width="610" height="714" title="SNAGHTML27c76c2d" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML27c76c2d" src="https://lh3.googleusercontent.com/-P7KxWE5Ileo/WXHeXAMB9xI/AAAAAAAADWs/m6n7hrvwwFMlE1J7Wv2wocA4FNeo38YHwCHMYCw/SNAGHTML27c76c2d_thumb%255B3%255D?imgmax=800" border="0"></a></p><h2>Step 3 – Create an SSH config file</h2><p>Create a config file for SSH.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqUlpRK8KfKFDcq6CJD7UNDwHQKMQqs7AKuunmHJ_O2H1r5XUvnldNXOcUJjF3lmLMHUvbgsi3e1O6M1EEYpJvuZ3RePgbXvZhCY4KDUjHPi2abjerKnabDW6-Y-U4eAHp-4o9JWxtaMM/s1600-h/SNAGHTML27c81a4f%255B5%255D"><img width="528" height="451" title="SNAGHTML27c81a4f" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML27c81a4f" src="https://lh3.googleusercontent.com/-mbHw8xcy3mI/WXHeYZVJXNI/AAAAAAAADW0/KJuCWl8mH98v_zo8m1OLbjWRVJgRvA77gCHMYCw/SNAGHTML27c81a4f_thumb%255B2%255D?imgmax=800" 
border="0"></a></p><p>Open the file and edit it. Add the following:</p>
<pre class="brush: text;">Host bitbucket.org
    IdentityFile ~/.ssh/id_rsa
</pre>
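<p>For reference, steps 1 to 3 can be condensed into a few commands. This is a sketch only – it assumes a Git Bash (or similar) shell and the default id_rsa key name, and it generates the key without a passphrase purely for illustration (drop -N "" to be prompted as described above):</p>

```shell
# Sketch of steps 1-3 as commands (assumes Git Bash or a similar shell).
# -N "" creates a key with an empty passphrase for illustration only.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -b 4096 -f "$HOME/.ssh/id_rsa" -N "" -q
cat >> "$HOME/.ssh/config" <<'EOF'
Host bitbucket.org
IdentityFile ~/.ssh/id_rsa
EOF
chmod 600 "$HOME/.ssh/config"
```

<p>The public half of the key, which is what gets pasted into BitBucket in step 5, ends up in ~/.ssh/id_rsa.pub.</p>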
<p><br></p><h2>Step 4 – Update the .bashrc file</h2><p>Check you’ve got a .bashrc file in your home directory. Create one if you don’t. Open the .bashrc file and edit it. Add the following:</p>
<pre class="brush: text;">#! /bin/bash
eval `ssh-agent -s`
ssh-add ~/.ssh/*_rsa
</pre>
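<p>Note that, as written, this starts a fresh ssh-agent for every new shell. A variant that reuses a running agent between shells might look like the sketch below (the agent.env cache file is my own naming, not part of the original setup):</p>

```shell
#! /bin/bash
# Sketch: reuse one ssh-agent across shells instead of starting a new agent
# every time. The agent.env cache file name is illustrative.
AGENT_ENV="$HOME/.ssh/agent.env"
mkdir -p "$HOME/.ssh"
if [ -f "$AGENT_ENV" ]; then
    . "$AGENT_ENV" > /dev/null
fi
# ssh-add -l exits with status 2 when it cannot contact an agent at all
ssh-add -l > /dev/null 2>&1 && rc=0 || rc=$?
if [ "$rc" -eq 2 ]; then
    # No reachable agent: start one, cache its environment, and load the keys.
    ssh-agent -s > "$AGENT_ENV"
    . "$AGENT_ENV" > /dev/null
    for key in "$HOME"/.ssh/*_rsa; do
        # skip silently if no keys exist or a key cannot be added non-interactively
        [ -f "$key" ] && ssh-add "$key" || true
    done
fi
```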
<p>See also <a href="http://www.andyfrench.info/2016/06/enter-rsa-passphrase-once-when-using.html">Enter RSA passphrase once when using Git bash</a>.</p><p>Close and reopen GitBash. You’ll be prompted for the passphrase.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfo6tZsAorNFN1tvWvuzEDfSpewrp5E1YJs0fDkUqu1zKgT5gcFiUTsAcF0ZJzexUtMSmoertejaZJ1icV3bv5Igj8Ra2KpnxIY7MkgY2wbmxvTNVGD1zsEBFcrrA_jJ0HLWXeL2lEJ0c/s1600-h/SNAGHTML27c90088%255B5%255D"><img width="629" height="263" title="SNAGHTML27c90088" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML27c90088" src="https://lh3.googleusercontent.com/-buSE-gvuop4/WXHeZYhDPgI/AAAAAAAADW8/RCxVo-ZoXKk2Doveogypaq7iIQBqJ-ddwCHMYCw/SNAGHTML27c90088_thumb%255B2%255D?imgmax=800" border="0"></a></p><h2>Step 5 - Configure BitBucket to use the new key</h2><p>Go to your BitBucket account and navigate to your settings. Adding the key is easy. <a href="https://confluence.atlassian.com/bitbucket/set-up-ssh-for-git-728138079.html">Follow the steps here</a>.</p><p><a href="https://lh3.googleusercontent.com/-D0NOstjIXXk/WXHeZopxKJI/AAAAAAAADXA/UMUmqEbo1OUegqZ9uo_mrhHqg30qtZGXACHMYCw/s1600-h/SNAGHTML27cc8153%255B5%255D"><img width="640" height="383" title="SNAGHTML27cc8153" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML27cc8153" src="https://lh3.googleusercontent.com/-GC04B4tndFc/WXHeaajrvhI/AAAAAAAADXE/5lP0356CXXwgiVwExEIXlMl3L9JH2DEhACHMYCw/SNAGHTML27cc8153_thumb%255B2%255D?imgmax=800" border="0"></a></p><p><a href="https://lh3.googleusercontent.com/-KJ2ONFMSGJw/WXHeawJMfkI/AAAAAAAADXI/nt6IvLYWVHwVhT3QQg0qs0xsVBzLuGzbACHMYCw/s1600-h/SNAGHTML27ce99f2%255B5%255D"><img width="640" height="412" title="SNAGHTML27ce99f2" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML27ce99f2" 
src="https://lh3.googleusercontent.com/-bdmxgOCcpEo/WXHebeWHvLI/AAAAAAAADXM/dnCS1zG0jbIdjiUpifhTpXLWWivRLZOuQCHMYCw/SNAGHTML27ce99f2_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Check you can access BitBucket using the new key.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhc457qA97v88QwHgzQOtI0746wpx8_HOZS-Fo9EFojwb7XcLu7k-QkCE1UXCqNlAKWY3dkt-x8hLWW3scjGAqSpMYnGXhkleqExNKsaFz80uBNAyccy3Ai48ptlGWHo17vnAT991OI6A/s1600-h/SNAGHTML27cf57a4%255B5%255D"><img width="641" height="299" title="SNAGHTML27cf57a4" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" alt="SNAGHTML27cf57a4" src="https://lh3.googleusercontent.com/-b2OGCLl-8Wk/WXHeeTYj69I/AAAAAAAADXU/3hGVOQ2WXJk_pPdEVfiUxszWpocCi_AIACHMYCw/SNAGHTML27cf57a4_thumb%255B2%255D?imgmax=800" border="0"></a></p><p>Done.</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-5858423300077676982017-01-26T14:36:00.001+00:002017-01-26T14:36:44.556+00:00Unscrambling a mess of .Net Core installations<p>I've had a variety of .Net Core installations on my laptop. The end result was .Net Core not working in Visual Studio 2015 anymore despite uninstalling and reinstalling .Net Core and associated Visual Studio tools. The problem was probably exacerbated by uninstallers not completing correctly and other sundry issues.</p> <p>In a perfect world I would have paved the machine and started again but at this point that’s not practical. Working in VMs might also be an option but for now I just want a laptop with .Net Core up and running.</p> <h2>Investigation</h2> <p>Firstly I decided to find out what was hanging around on the machine.
I opened a command prompt and ran <font face="Courier New">dotnet --version</font> to find out.</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir_GBWlSbQjHJ8EeLIwIlnBNE6uhg_T5x-DgxSyadH5H0ec9aM66phOUa_EFf7_5mFeyZOlgenUDhHoCExahp9JQCzYcGeM_P2jNq6wKv1ZEl8IOq7VAZ85whEEnlIEsdPGt-W5_p-Q_Q/s1600-h/2017-01-26%25252010_47_42-MINGW64__c_source%252520%252528Admin%252529%25255B9%25255D.png"><img title="2017-01-26 10_47_42-MINGW64__c_source (Admin)" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="2017-01-26 10_47_42-MINGW64__c_source (Admin)" src="https://lh3.googleusercontent.com/-AciX0N0hS7Q/WIoJcITvl1I/AAAAAAAADJI/B-mVTa2vAYg/2017-01-26%25252010_47_42-MINGW64__c_source%252520%252528Admin%252529_thumb%25255B5%25255D.png?imgmax=800" width="600" height="529"></a></p> <p>So this is saying I’ve got version <font face="Courier New">1.0.0-beta-001598</font>.
That seems very old to me and something that should have gone ages ago.</p> <p>Looking in Add/Remove Programs I see something different.</p> <p><a href="https://lh3.googleusercontent.com/-hI4xTso2PgY/WIoJcn3hJxI/AAAAAAAADJM/gcBJ6NGBKfk/s1600-h/2017-01-26%25252010_48_16-Control%252520Panel_Programs_Programs%252520and%252520Features%25255B6%25255D.png"><img title="2017-01-26 10_48_16-Control Panel_Programs_Programs and Features" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="2017-01-26 10_48_16-Control Panel_Programs_Programs and Features" src="https://lh3.googleusercontent.com/-0xIKBCDtTB8/WIoJdMwx5UI/AAAAAAAADJQ/VIOT8Pylpig/2017-01-26%25252010_48_16-Control%252520Panel_Programs_Programs%252520and%252520Features_thumb%25255B4%25255D.png?imgmax=800" width="1006" height="193"></a></p> <p>This suggests 2 other versions have been installed: 1.0.0 Preview2-003131 and 1.0.0 Preview2-003133. Hmm…</p> <p>So, I had a look at the PATH environment variable on the machine and saw some interesting things. The following directories – that all seem related to .Net Core – were listed in this order:</p> <ul> <li>c:\Program Files\dotnet\bin</li> <li>c:\Program Files\dotnet</li> <li>c:\Program Files\Microsoft DNX\Dnvm</li> <li>c:\Users\username\.dnx\runtimes\dnx-coreclr-win-x86.1.0.0-rc1-update2\bin</li> <li>c:\Users\username\.dnx\bin</li></ul> <p> </p> <p>Looking in the <font face="Courier New">c:\Program Files\dotnet</font> folder confirmed the installed SDKs but there was something odd. The dotnet.exe appeared in <font face="Courier New">c:\Program Files\dotnet</font> <em>and</em> in <font face="Courier New">c:\Program Files\dotnet\bin</font>. 
Running them from each location separately revealed different .Net Core versions.</p> <p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3ZPfdN6iRXXjXnODXrw7iSLgSAs_NhGqQndP-EkIGY5TVjp83xlsk-BXkQXzIFgAvXTNrr8VFBSEyNvFrahH9cjoWex7RzcAY8ukjwbkESBIbuy0_8-6ziUQce3bX0tAe7ZNUejnXnH0/s1600-h/2017-01-26%25252011_01_39-cmd%252520%252528Admin%252529%25255B10%25255D.png"><img title="2017-01-26 11_01_39-cmd (Admin)" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="2017-01-26 11_01_39-cmd (Admin)" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLFf9Q7kG_bQ1fus8bI6eSIOzhx8jBC1e2H3O1CYaM2bj5eZMfPlv_igzy3ZL5j1W1Sy3o-YUlw3KKo4ZNYvPiK3iF_nltJH_O6VM4JM5H_mGYA5mWn2QKYSdpykrf5iq4EcjyUnOMp9k/?imgmax=800" width="692" height="659"></a></p> <p>Yep. Things are in a mess. </p> <p>The <font face="Courier New">c:\Users\username\.dnx\</font> directory is also interesting. That isn’t used anymore. What’s in there I wonder?</p> <p><a href="https://lh3.googleusercontent.com/-j3hSDwnd2BY/WIoJeRcdCHI/AAAAAAAADJc/3QbN9AVGlJk/s1600-h/2017-01-26%25252010_53_03-cmd%252520%252528Admin%252529%25255B4%25255D.png"><img title="2017-01-26 10_53_03-cmd (Admin)" style="border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px" border="0" alt="2017-01-26 10_53_03-cmd (Admin)" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8NKJ7whJu014RnMvj-fUxwPoWOQk9ndGkrRlmVE40TZwHBm0Ba1r7M6DYcic8H3Fa26ZLVK74oA9bp-BjsA5R_mv0r36ZjpLNXaY8HBVJFj4Rzh91T6d2m6qw1L9hEtZP7Jx2nabOT74/?imgmax=800" width="789" height="365"></a></p> <p>Turns out it’s got the old – now redundant – dnvm application that gives a different version again. There’s a stack of other stuff in there too.
Sheesh, what a mess I’ve made.</p> <h2>Solution </h2> <p>My solution was to do the following:</p> <ul> <li>Uninstall the .Net Core and VS Tooling using Add/Remove Programs.</li> <li>Run dnvm uninstall from the old .dnx folder (probably not necessary but what the heck).</li> <li>Manually delete the following directories (and contents):</li> <ul> <li>c:\Program Files\dotnet</li> <li>c:\Program Files\Microsoft DNX</li> <li>c:\Users\username\.dnx</li></ul> <li>Remove the same directories as above (and/or sub directories) from both the user and system PATH environment variables.</li> <li>Restart the machine (probably not necessary – belt and braces).</li> <li>Install the latest .Net Core SDK and Visual Studio 2015 Tools from <a href="https://www.microsoft.com/net/download/core">here</a>.</li></ul> <p> </p> <p>The result? Working .Net Core in Visual Studio! </p> <p>There was a nice tidy c:\Program Files\dotnet folder with the dotnet.exe and a subfolder containing the SDKs. That’s it, no other folders required or present.</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-85183992725567093672017-01-12T17:58:00.001+00:002017-01-12T17:58:35.940+00:00What is Enterprise Architecture as described by TOGAF 9.1?<p> </p> <h2>What is Architecture?</h2> <p>The TOGAF documentation initially refers to the original version of ISO/IEC 42010:2007 (<em>Systems and software engineering</em>) which defines architecture in the following terms:</p> <blockquote> <p>“The fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution.” [1] [2]</p></blockquote> <p>However, TOGAF defines an architecture as follows:</p> <blockquote> <p>“1. A formal description of a system, or a detailed plan of the system at component level to guide its implementation <br>2.
The structure of components, their inter-relationships, and the principles and guidelines governing their design and evolution over time” [2] [3]</p></blockquote> <h2>What is an Enterprise?</h2> <p>An enterprise is defined as follows:</p> <blockquote> <p>“The highest level (typically) of description of an organization and typically covers all missions and functions. An enterprise will often span multiple organizations.” [4]</p></blockquote> <h2>What kinds of architecture are dealt with by TOGAF?</h2> <p>TOGAF deals with 4 kinds of architecture:</p> <ul> <li><strong>Business Architecture</strong></li> <ul> <li>defines the business strategy, governance, organization, and key business processes [2]</li> <li>a description of the structure and interaction between the business strategy, organization, functions, business processes, and information needs [5]</li></ul> <li><strong>Data Architecture</strong></li> <ul> <li>describes the structure of an organization's logical and physical data assets and data management resources [2]</li> <li>a description of the structure and interaction of the enterprise's major types and sources of data, logical data assets, physical data assets, and data management resources [6]</li></ul> <li><strong>Application Architecture</strong></li> <ul> <li>provides a blueprint for the individual applications to be deployed, their interactions, and their relationships to the core business processes of the organization [2]</li> <li>a description of the structure and interaction of the applications as groups of capabilities that provide key business functions and manage the data assets [7]</li></ul> <li><strong>Technology Architecture</strong></li> <ul> <li>describes the logical software and hardware capabilities that are required to support the deployment of business, data, and application services [2]</li> <li>includes IT infrastructure, middleware, networks, communications, processing, standards, etc. 
[2]</li> <li>a description of the structure and interaction of the platform services, and logical and physical technology components [8]</li></ul></ul> <p> </p> <h2>References</h2> <ul> <li>[1] <a title="http://www.iso-architecture.org/ieee-1471/defining-architecture.html" href="http://www.iso-architecture.org/ieee-1471/defining-architecture.html">http://www.iso-architecture.org/ieee-1471/defining-architecture.html</a></li> <li>[2] <a title="http://pubs.opengroup.org/architecture/togaf9-doc/arch/" href="http://pubs.opengroup.org/architecture/togaf9-doc/arch/">http://pubs.opengroup.org/architecture/togaf9-doc/arch/</a></li> <li>[3] <a title="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_08" href="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_08">http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_08</a></li> <li>[4] <a title="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_34" href="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_34">http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_34</a></li> <li>[5] <a title="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_22" href="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_22">http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_22</a></li> <li>[6] <a title="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_32" href="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_32">http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_32</a></li> <li>[7] <a title="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_04" href="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_04">http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_04</a></li> <li>[8] <a 
title="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_73" href="http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_73">http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap03.html#tag_03_73</a></li></ul>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-81638584889315338632016-12-26T00:28:00.001+00:002016-12-26T16:52:07.365+00:00Amazon gift cards (how Amazon can take your money and give nothing in return)<h2>Customer service? I think not</h2> <p><font color="#ff0000"><strong>Final update (26/12/16 16:30): </strong>Following a Tweet from me Amazon’s Social Media Team got in touch and resolved the issue, finally. Well done them and thanks to the lady who sorted it out for me. Case closed. I’ve left the blog post below for information.</font></p> <p>My son received 2 gift cards from a good friend of ours this Christmas. Both the cards were purchased at the same time direct from Amazon and were delivered direct to us. Both cards arrived at the same time in the same package. No 3rd party retailers were involved. </p> <p>My son was able to redeem the first card but the second one – with a value of £20 – simply wouldn’t work.</p> <p>So, I contact Amazon customer services using their online email facility but the results are shockingly poor and have left me feeling like I’m being taken for a ride. </p> <p>So far they have strung me along with an exchange of <strong>11 emails and one long phone call</strong>. I’ve included the whole thing below because of its absurdity. In short they start by saying the card was not authorised by the ‘retailer’ and that I should contact them. I point out <em>Amazon was the retailer</em> – no 3rd parties involved. They respond by asking for a PDF of the card which I send.
They then ask me to provide the following gem of information:</p> <blockquote> <p>“If you're able to see the any 3 consecutive digits of the claim code, other than the first 2 or last 4, please reply to this e-mail with these numbers along with the 16 digit card number located on the back of the card.”</p></blockquote> <p>Remember, I’d just sent them a PDF of the whole card. You have to assume I’m being strung along at this point.</p> <p>To rub salt in my wounds Amazon includes the following message at the bottom of each email:</p> <blockquote> <p>“Your feedback is helping us build Earth's Most Customer-Centric Company.” </p></blockquote> <p>That starts to look like a bit of a joke. Anyway, so far I’ve not been able to resolve the issue. <font color="#666666">To cut a long story short, Amazon has taken money from a friend of mine and provided nothing in return. What a great way to make money!</font></p> <p>The whole email trail follows if you are interested (I’ve omitted the card number for obvious reasons).</p> <h2>The email trail</h2> <p><strong>25/12/16 13:25:20 from me</strong><br><br>A gift card given to my son does not work. It's for £20 and has number ****-******-*****. Please advise. </p> <p><strong>25/12/16 15:34 from Amazon Customer Services</strong></p> <p>Hello, </p> <p>I'm sorry to hear you've had trouble using the Gift Card you received for your son and I’ll be happy to help you today. </p> <p>I’ve checked your account, and can see that according to our records the Gift Card wasn't activated by the retailer it was purchased from. </p> <p>Unfortunately, we can't activate this Gift Card for you as it was not created on our system. </p> <p>The best action to take in this situation is to bring the Gift Card back to the point of purchase with your receipt, where the retailer can reissue a new card for you to use. If you no longer have the receipt, please contact the shop where the card was purchased to resolve this.
</p> <p>Your patience and understanding is greatly appreciated. </p> <p>If you need any further information or assistance, please let us know by replying to this e-mail so that we'll be happy to help you further. <br>We look forward to seeing you again soon. </p> <p><strong><strong>25/12/16</strong> 17:43 from me</strong></p> <p>Sorry, but I think you are mistaken.</p> <p>Two gift cards arrived at the same time and were purchased by a friend direct from amazon.co.uk, not a 3rd party. The 2 cards arrived in the same package and came direct from you. One card worked, the second did not. I believe the mistake is at your end. </p> <p>Andrew French </p> <p><strong><strong>25/12/16</strong> 18:12 from Amazon Customer Services</strong></p> <p>Hello, </p> <p>I'm sorry to hear that you were unable to use one of your Gift Cards. </p> <p>We will be happy to take the necessary action. </p> <p>If you have received the Physical Gift card, I request you to attach the picture of the front and back side of Gift Card. </p> <p>Please attach the picture as a PDF, JPG or PNG file. </p> <p>If you have received the Email Gift card, I request you to just copy and past the entire gift card information, and send it to us. </p> <p>I am sorry in making you to write back to us, but this will help us in resolving this issue for you in an efficient manner. </p> <p>Thank you for you patience and understanding in this regard. </p> <p>We look forward to seeing you again soon. </p> <p><strong><strong>25/12/16</strong> 19:08 from me</strong></p> <p>Please find the PDF attached as requested. <br>[I attached the PDF to the email]</p> <p><strong><strong>25/12/16</strong> 20:54 from Amazon Customer Services</strong></p> <p>Hello Andy, </p> <p>Firstly, please accept my sincere apologies for any inconvenience caused by this situation. </p> <p>I understand the level of disappointment this has caused to you. If I had been in this situation, I would have felt the same. 
</p> <p>You have been a loyal and supportive customer with us since a long time, I highly appreciate your support with us. </p> <p>It's never our intention to cause inconvenience to our honest and valuable customer like you. </p> <p>Further to your email, I understand that the 2 Gift card were purchased by your friend but one Gift card is working another one is not working.</p> <p>In this situation to help you further, I have checked your friend account <a href="mailto:omitted@omitted.co.uk">omitted@omitted.co.uk</a> and see that he/she purchased only one gift card from our direct store and the order number for the one is #***-*******-*******. </p> <p>I have checked the image you have provided to us and can see that according to our records the Gift Card wasn't activated by the retailer it was purchased from. <br>Unfortunately, we can't activate this Gift Card for you as it was not created on our system. </p> <p>The best action to take in this situation is to bring the Gift Card back to the point of purchase with your receipt, where the retailer can reissue a new card for you to use. If you no longer have the receipt, please contact the shop where the card was purchased to resolve this. </p> <p>Should you require any additional information or assistance, please do not hesitate to contact us. </p> <p>Once again, please let me apologies for any inconvenience this has caused. It is never our intention to cause any sort on inconvenience to our valued customers like you. </p> <p>We look forward to seeing you again soon. </p> <p><strong><strong>25/12/16</strong> 21/31 from me</strong></p> <p>No, you have misunderstood again. </p> <p>The account you reference (<a>omitted@omitted.co.uk</a>) is not the purchaser of the gift cards but the recipient! The card you site as having been ‘purchased’ was not purchased at all but was redeemed. It is the one card that did work (as per email trail below). The card we are talking about here did not work. 
</p> <p>I have not given you the purchaser’s account name because I do not have it. <br>What I can say with a certainty is that both cards were purchased at the same time. They both arrived together in the same package direct from Amazon, not a 3rd party. THERE IS NO 3RD PARTY RETAILER TO CONTACT. AMAZON WAS THE RETAILER OF BOTH CARDS. </p> <p>I suggest you credit my son’s account (<a>omitted@omitted.co.uk</a>), the intended recipient of the card, with the £20 value and cancel the card itself. You can then take whatever steps are necessary to sort out the confusion at your end. </p> <p>If you are unable to resolve this matter – which is of your making – please provide details of how I may make a formal complaint. At this point Amazon have taken £20 for nothing in return. </p> <p><strong><strong>25/12/16</strong> 23:02 from Amazon Customer Services</strong></p> <p>Hello Andy,</p> <p>I'm sorry you weren't able to redeem the Gift Card to your account.</p> <p>Please accept my sincere apologies for any inconvenience caused by this.</p> <p>If you're able to see the any 3 consecutive digits of the claim code, other than the first 2 or last 4, please reply to this e-mail with these numbers along with the 16 digit card number located on the back of the card.</p> <p>If you're not able to provide 3 consecutive digits, please reply to this e-mail with the serial number and attach a scanned copy of the card as a PDF, JPG or PNG file.<br>Please reply to this e-mail with the serial number located on the back of the card and attach a scanned copy of the card as a PDF, JPG or PNG file.</p> <p>Once received, we'll attempt to validate the card and claim it to the account associated with this e-mail address.</p> <p>As a representative of Amazon.co.uk, I want to assure you that as our valued customer, your satisfaction is our top priority and be assured that your future order would better reflect our commitment to your satisfaction.</p> <p>I highly appreciate your patience, cooperation 
and understanding in this matter.<br>If you need any further information or assistance, please let us know by replying to this e-mail so that we'll be happy to help you further.</p> <p>We value your business with us and we are looking forward to serve you more in the future. </p> <p><strong><strong>25/12/16</strong> 23:50 from me</strong></p> <p>This is a disgrace. Just look how long this email trail is. You are asking for information ALREADY SUPPLIED. </p> <p>I have ALREADY SENT YOU A PDF of the card which includes the full serial number so why you are asking for that is beyond me. Anyway, I have reattached the PDF. The serial number is ****************, but you can see that in the PDF anyway. <p>And of course I can see the claim code. We have been trying to enter it into your system to redeem the card. I am pretty sure you are making unreasonable requests now simply to string me along without you actually doing anything. Anyway, in answer to your question about the claim code here are 3 consecutive digits other than the first 2 or last 4: ***. You can check them against the attached PDF if you’ve got nothing better to do. They appear after the first hyphen in the claim code. <p>And as a reminder, there is no 3<sup>rd</sup> party retailer. Amazon sold the card. <p>Do not reply to me asking for any further information. YOU HAVE IT ALL. Credit my son’s account (<a href="mailto:omitted@omitted.co.uk">omitted@omitted.co.uk</a>) with the monies owed (£20) immediately. <p><strong><strong>26/12/16</strong> 04:45 from Amazon Customer Services</strong> <p>Hello,</p> <p>I'm sorry to learn about the issue you experienced in relation to the Gift Card. 
I've reviewed our previous correspondence with you.</p> <p>The information provided in our last message correctly represents our policy at this time.</p> <p>As my colleague previously mentioned, I’ve checked your account, and can see that according to our records the Gift Card wasn't activated by the retailer it was purchased from.</p> <p>Unfortunately, we can't activate this Gift Card for you as it was not created on our system.</p> <p>The best action to take in this situation is to bring the Gift Card back to the point of purchase with your receipt, where the retailer can reissue a new card for you to use. If you no longer have the receipt, please contact the shop where the card was purchased to resolve this.</p> <p>Your patience and understanding is greatly appreciated.</p> <p>If you still face any issue then I kindly ask you to get in touch via phone. This way, you can speak to our live customer support team who can ensure we resolve this concern to your satisfaction. I'm sorry we don't share account and order information through email address due to security reasons.</p> <p>I realise that, at this point, asking you to contact us again would be disappointing; however, we really feel that the best way to assist you with this concern is over the phone.</p> <p>We're available 7 days a week 06.00 to midnight, local UK time. Freephone (within the UK): 0800 496 1081 International customers can reach us at +44 (0) 207 084 7911.</p> <p>Amazon cares about our customers, and we're working to improve our service and selection.</p> <p>Your patience and understanding is highly appreciated in this matter.<br>I hope this helps. We look forward to seeing you again soon. </p> <p><strong><strong>26/12/16</strong> 10:31 from me</strong></p> <p>Why don’t any of you read the previous emails? <p>As I said many times, AMAZON IS THE RETAILER. 
<p>Anyway, you can read all about it here: <a href="http://www.andyfrench.info/2016/12/amazon-gift-cards-how-amazon-can-take.html">http://www.andyfrench.info/2016/12/amazon-gift-cards-how-amazon-can-take.html</a> <p>Pay the money you owe. <p><strong><strong>26/12/16</strong> 13:30 No email this time – phoned Amazon customer support</strong> <p>I called Amazon Customer Support but they refused to deal with me even though I tried explaining that my son was only 16. They insisted on talking to him direct. Frankly, that’s outrageous. <p>Anyway, a long call ensued with my son having to read all the numbers on the card several times to the customer services representative and answering many questions. <p>The outcome? Contact the person who bought the card and ask them to contact Amazon, despite the fact that I told them he’s out of the country on an extended holiday. <p>Fobbed off again. <p><strong><strong>26/12/16</strong> 16:30 Contacted direct by the Amazon Social Media Team </strong> <p>OK, following a Tweet from me Amazon’s Social Media Team got in touch and sorted the issue out. Success at last and well done the nice Amazon lady who dealt with it. It was nice to talk to someone who could deviate from the script!</p>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.comtag:blogger.com,1999:blog-5868528425345912405.post-83765956463455852282016-10-03T14:51:00.001+01:002016-10-03T14:51:17.142+01:00Election algorithms for clustered software<h2>The problem</h2> <p>I’ve recently been looking at a problem with some software that was written to work in a cluster. This particular software service runs background jobs against a SQL Server database and, in order to support fail-over scenarios, runs as a cluster. Only one service instance (the <em>master</em>) was actually doing any work at any given time with the other instances (the <em>slaves</em>) providing redundancy in the case of a failure.
In other words, one instance would be nominated as the master and would take responsibility for running the background jobs. If the master crashed or became unavailable one of the other instances in the cluster would take over as master. <p>From now on I’m going to continue to use the term <em>service instance</em> to describe a software component that participates in a cluster. Each service instance is probably a separate <em>process</em>. <p>The problem was that the mechanism used to elect and monitor the master was based on UDP broadcast, and broadcast is something that can be problematic in cloud-based environments such as AWS. Given there was a need to migrate this service to the cloud this was a significant issue. <p>At a high level the election algorithm being used by the cluster was for service instances to use UDP broadcast to exchange messages between themselves to agree which instance would be the master. Once the master had been nominated it took over the work of running the background jobs. The other service instances would then periodically poll the master to check that it was still alive. The first instance to find the master to be unavailable would claim the master role, take over the responsibility of running the jobs and broadcast the change in master. <p>The use of UDP broadcast in this context was useful because it meant that service instances didn’t need to know about each other. To use more direct addressing it would be necessary to store the addresses of all instances in the cluster in some form of registry or configuration. Configuration management across multiple environments is itself a challenge so reducing the amount of configuration can be an advantage. <p>However, in this case the use of UDP broadcast was an issue that needed to be addressed to facilitate a move to the cloud. This provided a good opportunity to review clustering election patterns and approaches to writing clustered software in general to see what options are available.
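<p>The poll-and-claim scheme described above can be sketched in miniature. In the sketch below a heartbeat file stands in for polling the master over the network, and flock(1) stands in for whatever shared lock or messaging a real cluster would use – the file names and timeout are illustrative only:</p>

```shell
# Sketch of the poll-and-claim election described above, in miniature.
# A heartbeat file stands in for a network liveness probe and flock(1)
# for whatever shared lock a real cluster would use; names are illustrative.
HEARTBEAT=/tmp/master.heartbeat
LOCKFILE=/tmp/master.lock
RESULTS=/tmp/election.log
TIMEOUT=5   # seconds of heartbeat silence before the master is presumed dead

master_alive() {
    [ -f "$HEARTBEAT" ] || return 1
    [ $(( $(date +%s) - $(stat -c %Y "$HEARTBEAT") )) -le "$TIMEOUT" ]
}

claim_master() {
    # The first instance to grab the lock claims the master role.
    (
        if flock -n 9; then
            echo "$1 claimed master" >> "$RESULTS"
            touch "$HEARTBEAT"    # the new master starts heartbeating
            sleep 1               # simulate running the background jobs
        else
            echo "$1 remains slave" >> "$RESULTS"
        fi
    ) 9> "$LOCKFILE"
}

: > "$RESULTS"
touch -d '10 seconds ago' "$HEARTBEAT"   # simulate a master that went quiet
if ! master_alive; then
    claim_master instance-A &
    claim_master instance-B &
    wait
fi
```

<p>Exactly one of the two racing instances ends up as master; the loser finds the lock held and stands by, just as the slaves in the real service did.</p>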
<p>Note: There are alternatives to writing software that behaves as a cluster natively (e.g. ZooKeeper). This article does not deal with those alternatives but focuses on the creation of natively clustered software. <h2>Reasons for clustering</h2> <p>There are typically two reasons for writing software that supports clustering: <ul> <li>Failover – to prevent outages it is advantageous to build in redundancy so that if one service instance crashes there’s another available to take up the slack. Note that in this case it isn’t necessary for all instances to be doing useful work. Some may be on stand-by, available to take over if the primary fails but not doing anything while the primary is active. <li>Performance – running separate software instances (probably on separate servers) can improve application performance. In this case work can be distributed between instances and processed in parallel.</li></ul> <p> <p>Of course, these two aspects are not mutually exclusive; a cluster may support both high availability and distributed processing. <h2>Characteristics of clustered software</h2> <p>Typically, when running software as a cluster one instance will be nominated as the <em>coordinator</em> (<em>leader</em> or <em>master</em>). Note that this instance does not have to perform the work itself; it may choose to delegate the work to one of the other instances in the cluster. Alternatively – as in our example above – the coordinator may perform the work itself exclusively. <p>This is somewhat analogous to server clustering, which can be either symmetrical or asymmetrical. In the symmetrical case every server in the cluster is performing useful work. To distribute work between the servers in the cluster a load balancer is required. In the case of a software cluster it’s the instance elected as the coordinator that probably performs this task.
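<p>As a toy illustration of the symmetrical case, a coordinator acting as a load balancer might dispatch work round-robin. The instance names here are purely hypothetical:</p>

```python
from itertools import cycle


class Coordinator:
    """Sketch of a coordinator distributing work across cluster instances
    in round-robin fashion, much like a load balancer in a server cluster."""

    def __init__(self, instances):
        self._next_instance = cycle(instances)  # endless round-robin iterator

    def dispatch(self, job):
        """Assign the given job to the next instance in turn."""
        return (next(self._next_instance), job)


# Three hypothetical service instances sharing six jobs between them
coordinator = Coordinator(["instance-a", "instance-b", "instance-c"])
assignments = [coordinator.dispatch(job) for job in range(6)]
```

<p>Each instance receives every third job, so all of them are doing useful work at the same time.</p>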
<p>In the asymmetrical case only one server will be <em>active</em>, with the other server instances in the cluster being <em>passive</em>. A passive instance will only be activated in the event of a failure of the primary. In the case of a software cluster the coordinator would be the active instance, with other instances being passive. <p>Whichever basic topology is chosen, it will be necessary for the software cluster to elect a coordinator when the cluster starts. It will also be necessary for the cluster to recognise when a coordinator has crashed or become unavailable, and for this to trigger the election of a new coordinator. <p>When designing a system like this, care should be taken to avoid the coordinator itself becoming a bottleneck. There are also other considerations. For example, in auto-scaling scenarios what happens if the coordinator is shut down as a result of downsizing the infrastructure? <h2>Election patterns</h2> <p>How do software clusters go about managing the election of a coordinator? Below is a discussion of three possible approaches: <ul> <li>Distributed mutex – a shared mutex is made available to all service instances and is used to manage which instance becomes the coordinator. Essentially, all service instances race to grab the mutex. The first to succeed becomes the coordinator. <li>Bully algorithm – messaging between instances in the cluster is used to elect the coordinator. The election is based on some unique property of each instance (e.g. a process identifier). The instance with the highest value ‘wins’ and bullies the other instances into submission by claiming the coordinator role. <li>Ring algorithm – messaging between instances in the cluster is used to elect the coordinator. Service instances are ordered (either physically or logically) so each instance knows its <em>successors</em>.
Ordering in the ring is significant, with election messages being passed around the ring to figure out which instance is ‘at the top’. That instance is elected the coordinator.</li></ul> <p> <p>More detailed descriptions of the approaches are provided below. As you’d expect, each has its pros and cons. <h3>Distributed mutex</h3> <p>A mutex “ensures that multiple processes that share resources do not attempt to share the same resource at the same time”. In this case the ‘resource’ is really a role – that of coordinator – that one service instance adopts. <p>Using a distributed mutex has the advantage that it works in situations where there is no natural leader (e.g. no suitable process identifier, which would be required for the Bully Algorithm). Under some circumstances (e.g. when the coordinator is the only instance performing any work) the service instances need not know about each other either; the shared mutex is the only thing an instance needs to know about. In cases where the coordinator needs to distribute work amongst the other instances in the cluster, the coordinator must be able to contact – and therefore know about – the other instances. <p>The algorithm essentially follows this process: <ol> <li>Service instances race to get a lease over a distributed mutex (e.g. a database object). <li>The first instance to get the mutex is elected as the coordinator. Other instances are prevented from becoming the coordinator because they are blocked from getting a lease on the mutex. <li>The coordinator performs the task of coordinating the distribution of work (or executing it itself, depending on requirements). <li>The lease must be set to expire after a period of time and the coordinator must periodically renew it. If the coordinator crashes or becomes unavailable it won’t be able to renew the lease on the mutex, which will eventually expire and become available again. <li>All service instances periodically check the mutex to see if the lease has expired.
If a service instance finds the lease on the mutex to be available it attempts to secure the lease. If it succeeds, the instance becomes the new coordinator.</li></ol> <p> <p>Note that the mutex becomes a potential single point of failure, so consideration should be given to the scenario where unavailability of the mutex prevents the cluster from electing a coordinator. <p>Another characteristic of using a shared mutex in this way is that election of the leader is non-deterministic. Any service instance in the cluster could take on the role of coordinator. <p>A good explanation of the shared mutex approach can be found in <a href="https://msdn.microsoft.com/en-gb/library/dn568104.aspx">this article from MSDN</a>. <h3>Bully algorithm</h3> <p>There are some assumptions for the Bully Algorithm: <ul> <li>Each instance in the cluster has a unique identifier which must be an ordinal. This could be a process number or even a network address, but whatever it is we should be able to order instances in the cluster by this identifier. <li>Each instance knows the identifiers of the other instances that should be participating in the cluster (some may be dead for whatever reason). <li>Service instances don’t know which of the others are available and which are not. <li>Service instances must be able to send messages to each other.</li></ul> <p> <p>The basis of the Bully Algorithm is that the service instance with the highest identifier will be the coordinator. The algorithm provides a mechanism for service instances to discover which of them has the highest identifier, and for that instance to bully the others into submission by claiming the coordinator role. It follows this basic process: <ol> <li>A service instance sends an ELECTION message to all instances <em>with identifiers greater than its own</em> and awaits responses. <li>If no service instances respond, the originator can conclude it has the highest identifier and can therefore safely assume the role of coordinator.
The instance sends a COORDINATOR message to all other instances announcing the fact. Other instances will then start to periodically check that the coordinator is still available. If it isn’t, the instance that finds the coordinator unavailable will start a new election (back to step 1). <li>Any service instance receiving an ELECTION message and having an identifier greater than the originator’s will respond with an OK message indicating it’s available. <li>If, in response to an ELECTION message, the originator receives an OK response it knows there’s at least one service instance with a higher identifier than itself. The following then happens: <ol> <li>The original service instance abandons the election (because it knows there’s at least one instance with a higher identifier than itself). <li>Any instances that responded to the ELECTION message with OK now issue ELECTION messages themselves (starting at step 1) and the process repeats until the instance with the highest identifier has been elected. </li></ol></li></ol> <p>A nice description of the process can be found <a href="http://www.cs.colostate.edu/~cs551/CourseNotes/Synchronization/BullyExample.html">in this article</a>. <h3>Ring algorithm</h3> <p>As with the Bully Algorithm, there are some basic assumptions for the Ring Algorithm: <ul> <li>The service instances are ordered in some way. <li>Each service instance uses the ordering to know who its <em>successor</em> is (in fact it needs to know about all the instances in the ring, as we will see below). </li></ul> <p> <p>The Ring Algorithm basically works like this: <ol> <li>All service instances monitor the coordinator. <li>If any service instance finds the coordinator is not available it sends an ELECTION message to its successor. If the successor is not available the message is sent to the next instance in the ring until an active one is found.
<li>Each service instance that receives the ELECTION message adds its identifier to the message and passes it on as in step 2. <li>Eventually the message gets back to the originating service instance, which recognises the fact because its own identifier is in the list. It examines the list of active instances and finds the one with the highest identifier. The instance then issues a COORDINATOR message informing all the instances in the ring which one is now the coordinator (the one with the highest identifier). <li>The service instance with the highest identifier has now been elected as the coordinator and processing resumes.</li></ol> <p> <p>Note that multiple instances could recognise that the coordinator is unavailable, resulting in multiple ELECTION and COORDINATOR messages being sent around the ring. This doesn’t matter; the result is the same. <h2>Other things to look at</h2> <p>NanoCluster is a lightweight, non-intrusive leader-election library for .NET, available as a NuGet package. Source code is available on GitHub here: <p><a href="https://github.com/ruslander/NanoCluster">https://github.com/ruslander/NanoCluster</a> <p>It’s a small project and doesn’t seem to have been used a great deal, but it might provide some ideas. <h2>References</h2> <ul> <li>Cloud Networking: IP Broadcasting and Multicasting in Amazon EC2 - <a href="https://www.ravellosystems.com/blog/cloud-networking-ip-broadcasting-multicasting-amazon-ec2/">https://www.ravellosystems.com/blog/cloud-networking-ip-broadcasting-multicasting-amazon-ec2/</a> <li>Amazon VPC FAQs - <a href="https://aws.amazon.com/vpc/faqs/#R4">https://aws.amazon.com/vpc/faqs/#R4</a> <li>Leader Election Pattern - <a href="https://msdn.microsoft.com/en-gb/library/dn568104.aspx">https://msdn.microsoft.com/en-gb/library/dn568104.aspx</a> <li>Apache ZooKeeper - <a href="https://zookeeper.apache.org/">https://zookeeper.apache.org/</a></li></ul>Andy Frenchhttp://www.blogger.com/profile/04783736934753019832noreply@blogger.com