the hot new programming language

On Thu, 02 Jul 2015 10:47:06 -0700, the renowned John Larkin
<jlarkin@highlandtechnology.com> wrote:

Yup. Accountants. Lawyers. Plastic surgeons.

Cobol was designed so that bankers could code. It was brilliant.

https://en.wikipedia.org/wiki/COBOL#History_and_specification

Two of the designers were women, who were apparently more interested
in solving a real problem than they were interested in playing mental
games. Compare Cobol to c or Pascal or APL.

To be fair here, there are very few* operating systems written in
COBOL. It was developed to solve other types of real problems.

I would compare it more to SNOBOL or PL/1.

* but not zero! https://en.wikipedia.org/wiki/BLIS/COBOL

--
Best regards,
Spehro Pefhany
Amazon link for AoE 3rd Edition: http://tinyurl.com/ntrpwu8
Microchip link for 2015 Masters in Phoenix: http://tinyurl.com/l7g2k48
 
On 03/07/2015 03:17, John Larkin wrote:
On Thu, 02 Jul 2015 17:04:49 -0400, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 7/2/2015 3:24 PM, John Larkin wrote:
On Thu, 02 Jul 2015 15:18:29 -0400, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 7/2/2015 3:00 PM, John Larkin wrote:
It's a shame that ADA wasn't as widely accepted. Programmers tend to
hate safe languages that make them be careful.

As someone who knows many folks who work in the software industry/game
design, it's kind of cute when EEs talk about things that they're so
very sure about...;-)

He has part of a point, but at the time that C was taking off big time
most machines were pretty short of resources and C was a distinct
improvement over its predecessors like B and BCPL.

https://en.wikipedia.org/wiki/BCPL

How many buffer overrun vulnerabilities has Windows had so far? Round
your answer to the nearest thousand.

I would feel fairly hard-pressed to use a Microsoft product as some kind
of archetypical example to give insight into what "Programmers" think, haha

They are not all cowboys. McConnell's Code Complete, published by
Microsoft Press, is quite a good introduction to defensive programming. It is
just a shame that they don't always practice what they preach.
Most big software projects are buggy messes or outright, often
gigabuck, failures. I don't know why CS departments don't seem to
care. Somebody should figure out how to do good software, and teach
it.

The problem stems from their sheer complexity and the fact that the
customer tends not to really know what they want at the outset.

They do care. Academia dumps their rejects into the software industry.

I am somewhat annoyed that in the UK our professional body is far too
supine about expressing disgust when government software projects go
haywire - almost inevitably due to some combination of inadequate
specifications, bad management, budget overrun and mission creep.

The Royal Society of Chemistry is a heck of a lot better at PR.

--
Regards,
Martin Brown
 
On 7/2/2015 8:39 PM, Phil Hobbs wrote:
On 7/2/2015 8:26 PM, bitrex wrote:
On 7/2/2015 6:42 PM, Phil Hobbs wrote:
On 7/2/2015 5:29 PM, bitrex wrote:
On 7/2/2015 5:20 PM, John Larkin wrote:
On Thu, 02 Jul 2015 17:12:36 -0400, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 7/2/2015 4:21 PM, John Larkin wrote:
On Thu, 02 Jul 2015 16:08:15 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 07/02/2015 01:47 PM, John Larkin wrote:
On Thu, 02 Jul 2015 11:21:42 +1000, Sylvia Else
sylvia@not.at.this.address> wrote:

On 2/07/2015 5:13 AM, John Larkin wrote:

http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html




The revival of Basic is next.


Apparently people who can resist the urge to gnaw
their own leg off from boredom command a premium.

Sylvia.

Yup. Accountants. Lawyers. Plastic surgeons.

Cobol was designed so that bankers could code. It was
brilliant.

https://en.wikipedia.org/wiki/COBOL#History_and_specification



Two of the designers were women, who were apparently more
interested in solving a real problem than they were
interested in playing mental games. Compare Cobol to c
or Pascal or APL.


And run screaming in the other direction. Cobol is
verbose and inflexible. Just the sheer amount of typing
would slow me down a lot.

C was described as designed by geniuses to be used by
geniuses. But most programmers aren't geniuses. Most people
need hard typing and runtime bounds checking and proper
memory management to keep out of trouble; they need
verbose. I cite basically all Microsoft products.


They weren't geniuses, they just knew that they couldn't do
fucking runtime bounds checking and "proper" memory
management on a PDP-11 with as much processing power as a
modern clock radio

That was 40 years ago.

Actually, the 11 had great memory management hardware, but c
isn't designed to be able to use it. Everything gets mixed up.


Right, the main issue is what it always is, of course: nobody
ever expected the language to be as long-lived as it was, and
then once it becomes apparent that it actually is going to be
around for a long time they can't update it or add many new
features for fear of breaking backwards-compatibility for a bunch
of legacy shit


Riiighhttt. Which is why C++11 is just the same as K&R 1.0.

Cheers

Phil Hobbs

Sure, you can compile some C code with a C++ compiler, but modern C++
and ordinary C are so distant as to be barely recognizable as the
same language.

Except that with very rare exceptions, all ISO C programs are also valid
C++ programs. Even old-time K&R code can be compiled by Visual C++,
gcc, Intel C++, and every other variant I know of.

Unfortunately, breaking changes in C++11 and other versions with respect
to legacy code are actually nowhere near as rare as one would like.

Pretty far from "they can't update it or add many new features".

And one could very easily argue that the mess that is C++ is
_precisely_ the reason you shouldn't take a 40 year old language and
start trying to tack all sorts of modern features onto it...

I like C++ very much, because it's very general and very powerful. I'm
all ears for your arguments to the contrary.

I absolutely don't disagree with that statement, so I have no arguments
to the contrary. But it's also a very complicated (some might say
convoluted) language with an enormous amount of features packed in. For
my part, when I do use C++ I'd say I only use perhaps 10-20% of the
features available over standard ISO C99.

In complicated projects working with classes and objects is very nice,
being able to define abstract class types and "virtual" methods is
pretty cool, default arguments to functions, some operator overloading,
having a few more sophisticated "container" data types like stacks and
queues available in libraries (though in embedded you will often take a
massive performance hit for using them), and for embedded, the ability
to allocate memory at startup and use "placement new" to put dynamic
objects exactly where you want them in memory to avoid fragmentation is
also pretty cool.
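
A rough sketch of the placement-new idea described above, with made-up
names and assuming C++11; storage comes from a static pool reserved at
startup, so nothing touches the heap afterwards and nothing fragments:

// Carving objects out of a static buffer with placement new.
#include <cstddef>
#include <new>

struct Sample {
    int   channel;
    float value;
    Sample(int ch, float v) : channel(ch), value(v) {}
};

// Statically allocated, suitably aligned storage reserved at startup.
alignas(Sample) static unsigned char pool[16 * sizeof(Sample)];
static std::size_t next_slot = 0;

Sample* make_sample(int ch, float v)
{
    if (next_slot >= 16)
        return nullptr;                   // pool exhausted
    void* where = pool + next_slot++ * sizeof(Sample);
    return new (where) Sample(ch, v);     // construct in place, no malloc
}

int main()
{
    Sample* s = make_sample(3, 1.25f);
    if (s)
        s->~Sample();   // explicit destructor call; the memory itself is
                        // never freed, so it can never fragment
}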

But then there are parts that just feel awful...like a few months ago I
had to write some code using template types and object constructors that
worked with template types and I just hated it. I'll have to go back
and look at the code again I'm sure because, while it worked, I really
didn't have a huge clue as to what I was doing and I quickly forgot the
syntax because it seemed so unnatural.
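
For readers who haven't met the syntax, the construct being described is
probably something like a class template whose constructor is itself a
template; a made-up minimal example, not the actual code:

// A templated constructor inside a class template -- legal, useful,
// and famously odd-looking the first few times you meet it.
#include <cstddef>
#include <iostream>
#include <vector>

template <typename T>
class Buffer {
public:
    // The constructor has its own template parameter, independent of T.
    template <typename InputIt>
    Buffer(InputIt first, InputIt last) : data_(first, last) {}

    std::size_t size() const { return data_.size(); }

private:
    std::vector<T> data_;
};

int main()
{
    double raw[] = {1.0, 2.5, 4.0};
    Buffer<double> b(raw, raw + 3);   // InputIt is deduced as double*
    std::cout << b.size() << "\n";    // prints 3
}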

The argument I will make is that while C++ is very general and very
powerful, except for situations like embedded, high performance graphics
engine/game engine programming, and high performance
database/finance/web systems applications, it is the wrong tool for most
new projects.

Where that kind of performance isn't required, like, why not use C#? I
don't know that much C#, but from what I've explored it seems like a
breeze and a pleasure. Or when you don't even need a compiled language,
use Python. If you're OK with an interpreted language, you can often
get more accomplished in ten lines of Python than you could in 500 of C++.

And even in some of the cases I mentioned, like finance and systems
programming, C++ is starting to be edged out for new projects in favor
of newer compiled languages, like Rust and Go, that combine the
straightforward syntax of regular C with the "batteries included",
ease-of-use philosophy of interpreted languages like Python, with garbage
collection, intrinsic support for threading and multiprocessing and type
safety, and a compiler that generates blazing fast code.

I think C++ still has a place in 2015; however when you look around at
the other stuff that's out there, to me it seems that saying you really
really "like" it might be sort of a case of Stockholm syndrome.
 
On 03/07/2015 01:44, Clifford Heath wrote:
On 03/07/15 05:24, John Larkin wrote:
On Thu, 02 Jul 2015 15:18:29 -0400, bitrex
As someone who knows many folks who work in the software industry/game
design, it's kind of cute when EEs talk about things that they're so
very sure about...;-)

How many buffer overrun vulnerabilities has Windows had so far? Round
your answer to the nearest thousand.

None, because some more careless company would have put Microsoft out of
business. We have the "quality" we were prepared to pay for, more or less.

Clifford Heath.

Sadly I have to concede that you have a point there.

OS/2 was a lot more robust but IBM made a total mess of selling it!

The problem is that modern management views time to market as the
priority to get sales bonuses. It is only software after all and you can
sell "support" to the poor suckers or maybe a years free updates.

No-one would dream of doing it for physical hardware with product
recalls, messy patch wires and oodles of conformal coating.

There are static analysis tools that can find latent bugs in a codebase
without executing it, by using dataflow analysis to look for any paths
through the code where variables are used before first being assigned a
value or where a null pointer could be dereferenced.

Such bugs can lurk for a long time along seldom executed paths (often
but not always the error recovery for some critical system failure).
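
To make that concrete, a contrived C++ fragment (invented for
illustration, not taken from any real codebase) showing both defect
classes such tools flag: a value read before it is assigned, and a null
pointer dereference, both hiding on the rarely taken path:

// Dataflow analysis will warn that 'status' can be read uninitialized
// and that 'dev' can be null when it is dereferenced.
#include <cstdio>

struct Device { int status; };

Device* open_device(bool ok) { return ok ? new Device{0} : nullptr; }

int read_status(bool ok)
{
    int status;                       // not initialized here
    Device* dev = open_device(ok);
    if (dev) {
        status = dev->status;         // only assigned on the happy path
    }
    std::printf("status = %d\n", status);       // uninitialized if !ok
    std::printf("again  = %d\n", dev->status);  // null deref if !ok
    delete dev;
    return status;
}

int main() { return read_status(true); }  // the buggy path is seldom taken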

--
Regards,
Martin Brown
 
On 03/07/2015 02:43, Joe Gwinn wrote:
In article <mn4mc1$o6m$1@dont-email.me>, rickman <gnuarm@gmail.com
wrote:

On 7/2/2015 8:20 PM, Joe Gwinn wrote:
In article <3e2bpap8e70o7mnep2joboecjrdsggpp96@4ax.com>, John Larkin
jlarkin@highlandtechnology.com> wrote:

It's a shame that ADA wasn't as widely accepted. Programmers tend to
hate safe languages that make them be careful.

In the day, I was one of the many programmers who suffered through
Ada83. I made a little career of making Ada run fast enough to be
plausible in realtime applications like radars.

Ada was too big and complicated for its own good. Modula2 wasn't bad
for a minimalist hard realtime language with separate compilation of
modules and provision for low level system operations by design.

All the mainframe compilers in that era apart from the most expensive
high end FORTRAN had pretty lousy optimisers.

The method was simple but brutal - remove all of Ada that didn't look
exactly like Pascal, and if that wasn't enough, resort to assembly.
This was still Ada enough to qualify as Ada, and to meet the DoD
mandate.

Datapoint: My team implemented what would now be called middleware in
a severe subset of Ada83 plus some assembly code on a DEC VAX to
replace a pure-Ada message communications infrastructure. The subset
Ada plus assembly approach was literally ten times faster than the pure
Ada, and saved the project.

By the time Ada95 came out, it was too late - C/C++ had won.

Ada died because it was designed by academics who had no notion of
realtime, and thus made blunder after blunder, such that not even a DoD
Mandate could save Ada.

I think that is a somewhat jaundiced view of the situation (although I
am not a particular fan of Ada). Algol68 had similar speed complexity
tradeoffs in a previous era. Fortran compilers could run rings around it
performance-wise, but for algorithm development it was still very useful.

If you are an Ada expert, I will defer to your judgement. But was the
speed problem with Ada inherent in the language (therefore the fault of
the language designers) or was it just the shortcomings of the
implementations?

It was the design of Ada83 the language, which forced all Ada83
compilers to be clumsy, and to emit slow code. There are many details.

I think you will have to elaborate on that point. Offhand I can't see
anything in the Ada language specification apart from its immense size
and the enormous validation suite that forced Ada compilers to be clumsy
and emit slow code. I will concede that in the era where C was in the
ascendency a great deal more effort was expended on its code optimiser
than on any other language apart from Fortran.

What could have changed in Ada95 that would speed up
implementations?

Fixing the many blunders of Ada83 was a large part of it. (There was
lots of field experience by then.) Adding true multiprocessor support
and the ability to control hardware was another. Adding support for
object-oriented programming (borrowed from C++). Etc.

All successful programming languages so far have had the following
things in common: Developed by a handful of people who wrote the
compiler at the same time. Not a committee writing a 300-page language
description in a vacuum. The early language and compilers co-evolved
as real users (typically the colleagues of the handful) used the
language for real work, and brought the problems back to the handful,
who fixed the problem by changing the language if needed, for a few
full versions. Full darwinist selection in the marketplace beyond the
handful and their colleagues - hundreds of languages qualify for the
common characteristics named before, but only a few languages achieve
more than a few hundred users. After such a language has succeeded in
the marketplace for some time, and has matured, it is formally
standardized (and thus frozen).

I don't disagree that most of the powerful languages started out as
someone's pet project that gained momentum locally and then globally.
Ada83 violated all of the above.


I know C has gotten faster over the years to the point
where it is hard to write hand optimized assembly that is faster. At
least that is what I read. I don't use C much these days and I can't
remember the last time I wrote assembly other than for stack machines
with a very simple assembly language.

Assembly language is always faster than compiled language, if competent
assembly programmers are used. And assembly can do things that are
very difficult in any compiled language, with direct hardware control
being a traditional area. But the cost is about four times that of
compiled C, so assembly is used sparingly. The UNIX kernel was about
4% assembly.

That isn't true any more with profile directed optimising compilers. The
compiler can often beat all but the very best handpicked assembler
practitioners. Register colouring, speculative execution and pipeline
stalls mean that most people cannot see the wood for the trees now. The
compiler (with the right CPU optimising flags set) does it seamlessly.
There is also a nasty trick: write the code in C, and look at the
resulting generated assembly code. If it's worse than what one can
write manually, paraphrase the C code and try again. Eventually, the
optimum paraphrase will be found. In practice, C compilers are
sufficiently alike that a good paraphrase works on most compilers.

A surprising number of modern compilers will reduce common loop
constructs to almost identical canonical code. Certain constructs are
faster on some Intel CPUs than on others so it chooses the right one.
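
As a hedged illustration of the "paraphrasing" idea, two ways of writing
the same copy loop; inspecting the generated assembly (for instance with
-O2 -S) shows whether a given compiler treats them identically, which
modern ones usually do:

// Two equivalent paraphrases of a copy loop. On older or weaker
// compilers one form could produce noticeably better code.
#include <cstddef>

void copy_indexed(const int* src, int* dst, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i];
}

void copy_pointers(const int* src, int* dst, std::size_t n)
{
    const int* end = src + n;
    while (src != end)
        *dst++ = *src++;
}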

I use mostly VHDL for hardware design which is very Ada like and Forth
for apps which is like a stack machine assembly language. A bit of a
strange dichotomy, but there is no Forth-like HDL.

VHDL was expressly based on Ada, and Verilog on C.

FORTH is a pure stack language, most resembling an HP calculator (with
RPN), and was developed for machine control by an astronomer. The
machines being controlled include large telescopes.
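
For anyone unfamiliar with the model, a tiny RPN evaluator sketches the
stack discipline that Forth and HP calculators share (illustrative only;
real Forth is of course far richer than this):

// Operands are pushed on a stack; an operator pops two values and
// pushes the result.
#include <iostream>
#include <sstream>
#include <stack>
#include <string>

double eval_rpn(const std::string& expr)
{
    std::stack<double> s;
    std::istringstream in(expr);
    std::string tok;
    while (in >> tok) {
        if (tok == "+" || tok == "-" || tok == "*" || tok == "/") {
            double b = s.top(); s.pop();
            double a = s.top(); s.pop();
            s.push(tok == "+" ? a + b :
                   tok == "-" ? a - b :
                   tok == "*" ? a * b : a / b);
        } else {
            s.push(std::stod(tok));
        }
    }
    return s.top();
}

int main()
{
    std::cout << eval_rpn("3 4 + 2 *") << "\n";   // prints 14
}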

I was very interested in FORTH when it went public and they were being
very close-mouthed. I ended up developing my own prototype language,
which I called Fifth. The interpreter core was written in assembly, as
that was how to get the necessary speed. But I found Fifth to be too
limiting for what I was then doing, and never pursued it.

I don't remember it being all that closed mouthed. It was in relatively
wide use in the late 1970's for telescope control and I thought the
magnetic tapes were circulating fairly freely in that community.

There was even a Forth for the venerable BBC micro in about 1983.

--
Regards,
Martin Brown
 
On 07/03/2015 08:54 AM, bitrex wrote:
On 7/2/2015 8:39 PM, Phil Hobbs wrote:
On 7/2/2015 8:26 PM, bitrex wrote:
On 7/2/2015 6:42 PM, Phil Hobbs wrote:
On 7/2/2015 5:29 PM, bitrex wrote:
On 7/2/2015 5:20 PM, John Larkin wrote:
On Thu, 02 Jul 2015 17:12:36 -0400, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 7/2/2015 4:21 PM, John Larkin wrote:
On Thu, 02 Jul 2015 16:08:15 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 07/02/2015 01:47 PM, John Larkin wrote:
On Thu, 02 Jul 2015 11:21:42 +1000, Sylvia Else
sylvia@not.at.this.address> wrote:

On 2/07/2015 5:13 AM, John Larkin wrote:

http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html





The revival of Basic is next.


Apparently people who can resist the urge to gnaw
their own leg off from boredom command a premium.

Sylvia.

Yup. Accountants. Lawyers. Plastic surgeons.

Cobol was designed so that bankers could code. It was
brilliant.

https://en.wikipedia.org/wiki/COBOL#History_and_specification



Two of the designers were women, who were apparently more
interested in solving a real problem than they were
interested in playing mental games. Compare Cobol to c
or Pascal or APL.


And run screaming in the other direction. Cobol is
verbose and inflexible. Just the sheer amount of typing
would slow me down a lot.

C was described as designed by geniuses to be used by
geniuses. But most programmers aren't geniuses. Most people
need hard typing and runtime bounds checking and proper
memory management to keep out of trouble; they need
verbose. I cite basically all Microsoft products.


They weren't geniuses, they just knew that they couldn't do
fucking runtime bounds checking and "proper" memory
management on a PDP-11 with as much processing power as a
modern clock radio

That was 40 years ago.

Actually, the 11 had great memory management hardware, but c
isn't designed to be able to use it. Everything gets mixed up.


Right, the main issue is what it always is, of course: nobody
ever expected the language to be as long-lived as it was, and
then once it becomes apparent that it actually is going to be
around for a long time they can't update it or add many new
features for fear of breaking backwards-compatibility for a bunch
of legacy shit


Riiighhttt. Which is why C++11 is just the same as K&R 1.0.

Cheers

Phil Hobbs

Sure, you can compile some C code with a C++ compiler, but modern C++
and ordinary C are so distant as to be barely recognizable as the
same language.

Except that with very rare exceptions, all ISO C programs are also valid
C++ programs. Even old-time K&R code can be compiled by Visual C++,
gcc, Intel C++, and every other variant I know of.

Unfortunately, breaking changes in C++11 and other versions with respect
to legacy code are actually nowhere near as rare as one would like.

Pretty far from "they can't update it or add many new features".

And one could very easily argue that the mess that is C++ is
_precisely_ the reason you shouldn't take a 40 year old language and
start trying to tack all sorts of modern features onto it...

I like C++ very much, because it's very general and very powerful. I'm
all ears for your arguments to the contrary.


I absolutely don't disagree with that statement, so I have no arguments
to the contrary. But it's also a very complicated (some might say
convoluted) language with an enormous amount of features packed in. For
my part, when I do use C++ I'd say I only use perhaps 10-20% of the
features available over standard ISO C99.

In complicated projects working with classes and objects is very nice,
being able to define abstract class types and "virtual" methods is
pretty cool, default arguments to functions, some operator overloading,
having a few more sophisticated "container" data types like stacks and
queues available in libraries (though in embedded you will often take a
massive performance hit for using them), and for embedded, the ability
to allocate memory at startup and use "placement new" to put dynamic
objects exactly where you want them in memory to avoid fragmentation is
also pretty cool.

But then there are parts that just feel awful...like a few months ago I
had to write some code using template types and object constructors that
worked with template types and I just hated it. I'll have to go back
and look at the code again I'm sure because, while it worked, I really
didn't have a huge clue as to what I was doing and I quickly forgot the
syntax because it seemed so unnatural.

I hear you.

C++ isn't really one language, it's a whole bunch of them crammed in
together. There's no reason to use parts of it you don't like. I
mostly use it as "C with references, classes and templates", but am
using more of the standard library as time goes on. I've never got into
iostreams, because formatting output is so clumsy compared with printf()
and so on. (I have my own parsing library, so I don't have to use
fscanf(), which is the really ugly part of stdio.)
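
For comparison, here is the same formatted line written both ways; this
is only an illustration of the clumsiness being described, not code from
the post:

// Both statements print: "ch  3:  1.250 V"
#include <cstdio>
#include <iomanip>
#include <iostream>

int main()
{
    int    ch = 3;
    double v  = 1.25;

    std::printf("ch %2d: %6.3f V\n", ch, v);

    std::cout << "ch " << std::setw(2) << ch << ": "
              << std::fixed << std::setprecision(3) << std::setw(6) << v
              << " V\n";
}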

That style is an excellent fit for what I do with it, namely
console-mode programs for instrument control, big-iron simulators, and
lab automation. I write the stuff that has to be fast or do complicated
down-on-the-metal things in C++, and often put the slow things in a REXX
wrapper script that calls gnuplot or gwenview or whatever graphics
program I'm using.

There's not much likelihood that I'm ever going to need functors or
lambdas or template metaprogramming.

The argument I will make is that while C++ is very general and very
powerful, except for situations like embedded, high performance graphics
engine/game engine programming, and high performance
database/finance/web systems applications, it is the wrong tool for most
new projects.


Where that kind of performance isn't required, like, why not use C#? I
don't know that much C#, but from what I've explored it seems like a
breeze and a pleasure. Or when you don't even need a compiled language,
use Python. If you're OK with an interpreted language, you can often
get more accomplished in ten lines of Python than you could in 500 of C++.

I might learn Python this summer if things slow down a bit, but it's
really slow. My son Simon is using the CERN math toolkit (whose name I
forget--Jeroen, are you there?) and says it's powerful like Python but
about 100 times as fast. (He's working on a dark matter detection
experiment with 250 22-inch PMTs and a bunch of FPGAs and stuff.) I
suspect I may be better off improving my Octave skills, which are very
rusty.
And even in some of the cases I mentioned, like finance and systems
programming, C++ is starting to be edged out for new projects in favor
of newer compiled languages, like Rust and Go, that combine the
straightforward syntax of regular C with the "batteries included",
ease-of-use philosophy of interpreted languages like Python, with garbage
collection, intrinsic support for threading and multiprocessing and type
safety, and a compiler that generates blazing fast code.

GC has been available in C++ for aeons--Hans Boehm wrote it as part of
Boost. And smart-pointer-based RAII makes it pretty well redundant
anyway. I haven't had a memory leak in forever.
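
For readers who haven't met it, smart-pointer RAII just ties an object's
lifetime to a scope; a minimal sketch, assuming C++11 std::unique_ptr and
invented names:

// The Instrument is destroyed automatically on every return path, so
// there is no delete to forget and no leak.
#include <cstdio>
#include <memory>

struct Instrument {
    Instrument()    { std::puts("open");    }
    ~Instrument()   { std::puts("close");   }
    void measure()  { std::puts("measure"); }
};

void run(bool abort_early)
{
    std::unique_ptr<Instrument> inst(new Instrument);
    if (abort_early)
        return;          // destructor still runs here
    inst->measure();
}                        // ...and here

int main()
{
    run(true);
    run(false);
}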

I think C++ still has a place in 2015; however when you look around at
the other stuff that's out there, to me it seems that saying you really
really "like" it might be sort of a case of Stockholm syndrome.

Nah, C++ is a pleasure, at least my flavour of C++. Plus it'll be
awhile before fancy new languages are common in embedded systems,
whereas C/C++ is ubiquitous.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On Fri, 03 Jul 2015 11:52:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 01/07/2015 20:35, Lasse Langwadt Christensen wrote:
Den onsdag den 1. juli 2015 kl. 21.13.49 UTC+2 skrev John Larkin:

http://www.itworld.com/article/2694378/college-students-learning-cobol-make-more-money.html

Only so long as Cobol programmers are in short supply.
These things run in cycles.

The revival of Basic is next.


I doubt it, the reason to teach people cobol is that there is still a
ton of cobol code in use and I assume those who originally learned it
is getting a
bit grey

The problem is in maintaining code which is critical to the banks' core
clearing operations (as opposed to the snazzy trader floor stuff).

Envision trillion dollar bugs. I suspect it happens all the time, and
they just clean up afterwards.

They don't spend much on the legacy systems and they do tend to have
spectacular MFUs from time to time. Some banks more than others.

I saw an article a couple of years ago, something like 75% of all business transactions,
90% of financial transactions are still done in cobol.
200 billion lines of code running, 5 billion lines added every year


-Lasse


If you want to sell your soul for maximum financial gain then
destabilising the global stock trading systems with sophisticated high
frequency trading algorithms is definitely the way to go.

Yeah, a good fraction of MIT grads get sucked up for that.

One guy in the UK in his parents bedroom can allegedly do this:

http://www.bbc.co.uk/news/business-32415664

People seem to get upset if you are too good at it!

Terrifying instability.


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
Den fredag den 3. juli 2015 kl. 20.35.39 UTC+2 skrev DecadentLinuxUserNumeroUno:
On Fri, 03 Jul 2015 10:45:15 -0700, John Larkin
jlarkin@highlandtechnology.com> Gave us:


The real advantage, to me, of using Spice here, is evaluating the
magnetics. Magnetics tend to be a pain. Spice computes the peak and
RMS coil currents, something I prefer not to do any other way.

Their problem is that there are few, if any, models of xformers with
a single turn feedback winding in their libraries, and adding one
doesn't work, because it needs to be magnetically coupled. So even
their modeling structure needs work.

That is one of the main reasons I do not model my supplies with it, as
I *do* use a feedback winding.

you just add the inductors and tell it what the coupling between them is

http://cds.linear.com/docs/en/lt-journal/LTMag-V16N3-23-LTspice_Transformers-MikeEngelhardt.pdf

-Lasse
 
On 7/3/2015 9:58 AM, Phil Hobbs wrote:

Where that kind of performance isn't required, like, why not use C#? I
don't know that much C#, but from what I've explored it seems like a
breeze and a pleasure. Or when you don't even need a compiled language,
use Python. If you're OK with an interpreted language, you can often
get more accomplished in ten lines of Python than you could in 500 of C++.

I might learn Python this summer if things slow down a bit, but it's
really slow. My son Simon is using the CERN math toolkit (whose name I
forget--Jeroen, are you there?) and says it's powerful like Python but
about 100 times as fast. (He's working on a dark matter detection
experiment with 250 22-inch PMTs and a bunch of FPGAs and stuff.) I
suspect I may be better off improving my Octave skills, which are very
rusty.

You should definitely check out Python. SciPy and Numpy are the
scientific computing and numerical packages for Python respectively -
basically what it ends up is having a nice Python scripting language
interface to a scientific computing backend that's compiled FORTRAN code
which is pretty fast and supports multithreading/multiprocessing.

You can even buy an inexpensive package that will take your blob of
Python scripting/FORTRAN and distill it down into an optimized
implementation that can run in parallel on GPU compute clusters.

After writing a few computing applications in SciPy with all the
niceties of Python, going back to MATLAB flavored stuff just hurts.


And even in some of the cases I mentioned, like finance and systems
programming, C++ is starting to be edged out for new projects in favor
of newer compiled languages, like Rust and Go, that combine the
straightforward syntax of regular C with the "batteries included",
ease-of-use philosophy of interpreted languages like Python, with garbage
collection, intrinsic support for threading and multiprocessing and type
safety, and a compiler that generates blazing fast code.

GC has been available in C++ for aeons--Hans Boehm wrote it as part of
Boost. And smart-pointer-based RAII makes it pretty well redundant
anyway. I haven't had a memory leak in forever.

Yeah, but ugh...Boost. That's a whole box of its own problems right there.

Cheers

Phil Hobbs
 
Also, while Python's "batteries included" philosophy means there are
about a billion different packages available for import with all sorts
of calls and methods, the core feature set/keywords of Python is fairly
small...

https://docs.python.org/2/library/functions.html

...and the "Pythonic" way of writing code (using lists, dictionaries,
tuples, and sets, just about everything is an immutable object with
methods and attributes, you can iterate over the items in container
objects and use generator expressions to create new objects, etc.) is
fairly elegant and easy to pick up.
 
On Fri, 03 Jul 2015 13:38:02 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 02/07/2015 23:07, rickman wrote:
On 7/2/2015 6:03 PM, John Larkin wrote:

We can always buy faster CPUs.

That is literally the stupidest thing I've ever seen come from you.


I don't often defend Johns comments but on this he does have a point.
For most home and office kit these days the CPU power available is so
huge that efficiency literally does not matter in end user code.

Maybe ricky runs a lot of 3D EM simulations. Or maybe he games a lot.

I only notice CPU runtime rarely, in Spice sims of oscillators and
switchers. Most Spice sims run in zero subjective time. And they are
giving me a new PC soon, 5x or so faster than what I have. Most people
run Facebook and Twitter and texting, so a small ARM in the corner of
a chip is plenty of compute power.

I run ATLC once in a great while, which is compute intensive, but I
doubt that I've totaled four hours computing in the last year, and I
can always do something else, clean my office maybe, while it's
running. Upgrading my PC will take me more time than I might ever get
back.

Intel must appreciate that the water-cooled gigaflop desktop biz can't
be the future. They just canned a bunch of top management.

The notable exceptions are video editing and 3D gaming, which really do
push the performance envelope. For word processing and general stuff you
can trade a few percent of speed for safety without any problems.

Fast CPUs are cheap and getting cheaper in line with Moore's law whereas
good software engineers are rare, expensive and getting more so.

So it makes sense to trade runtime efficiency (as in, Python type
things) for program quality and security.

TBH I am amazed that Moore's law has held good for as long as it has.

The wall isn't too far away. EUV is still in doubt. Maybe the Moore's
law feature-size progression will die from lack of demand, rather than
lack of supply. If all you need is a hundred million transistors on a
chip, there's no economic upside to doing that with 7 nm features and
insane mask costs. About the only app that benefits from extreme
resolution is memory.



--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 03 Jul 2015 13:51:05 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 03/07/2015 03:17, John Larkin wrote:
On Thu, 02 Jul 2015 17:04:49 -0400, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 7/2/2015 3:24 PM, John Larkin wrote:
On Thu, 02 Jul 2015 15:18:29 -0400, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 7/2/2015 3:00 PM, John Larkin wrote:
It's a shame that ADA wasn't as widely accepted. Programmers tend to
hate safe languages that make them be careful.

As someone who knows many folks who work in the software industry/game
design, it's kind of cute when EEs talk about things that they're so
very sure about...;-)

He has part of a point, but at the time that C was taking off big time
most machines were pretty short of resources and C was a distinct
improvement over its predecessors like B and BCPL.

https://en.wikipedia.org/wiki/BCPL

40 years ago.


How many buffer overrun vulnerabilities has Windows had so far? Round
your answer to the nearest thousand.

I would feel fairly hard-pressed to use a Microsoft product as some kind
of archetypical example to give insight into what "Programmers" think, haha

They are not all cowboys. McConnell's Code Complete, published by
Microsoft Press, is quite a good introduction to defensive programming. It is
just a shame that they don't always practice what they preach.

Amen!


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 03 Jul 2015 14:03:34 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 03/07/2015 01:44, Clifford Heath wrote:
On 03/07/15 05:24, John Larkin wrote:
On Thu, 02 Jul 2015 15:18:29 -0400, bitrex
As someone who knows many folks who work in the software industry/game
design, it's kind of cute when EEs talk about things that they're so
very sure about...;-)

How many buffer overrun vulnerabilities has Windows had so far? Round
your answer to the nearest thousand.

None, because some more careless company would have put Microsoft out of
business. We have the "quality" we were prepared to pay for, more or less.

Clifford Heath.

Sadly I have to concede that you have a point there.

OS/2 was a lot more robust but IBM made a total mess of selling it!

The problem is that modern management views time to market as the
priority to get sales bonuses. It is only software after all and you can
sell "support" to the poor suckers or maybe a years free updates.

No-one would dream of doing it for physical hardware with product
recalls, messy patch wires and oodles of conformal coating.

It's impressive that the software industry is able to evade legal
liability for the damage they do.


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 03 Jul 2015 08:41:14 -0400, Spehro Pefhany
<speffSNIP@interlogDOTyou.knowwhat> wrote:

On Thu, 02 Jul 2015 10:47:06 -0700, the renowned John Larkin
jlarkin@highlandtechnology.com> wrote:


Yup. Accountants. Lawyers. Plastic surgeons.

Cobol was designed so that bankers could code. It was brilliant.

https://en.wikipedia.org/wiki/COBOL#History_and_specification

Two of the designers were women, who were apparently more interested
in solving a real problem than they were interested in playing mental
games. Compare Cobol to c or Pascal or APL.


To be fair here, there are very few* operating systems written in
COBOL. It was developed to solve other types of real problems.

I would compare it more to SNOBOL or PL/1.

* but not zero! https://en.wikipedia.org/wiki/BLIS/COBOL

Burroughs wrote their OS and their Algol compilers entirely in Algol.
Apparently a couple of guys read the Algol source code of the first
compiler and hand-compiled that to machine code, as the bootstrap.


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 3 Jul 2015 07:29:00 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

Den fredag den 3. juli 2015 kl. 12.52.52 UTC+2 skrev Martin Brown:
If you want to sell your soul for maximum financial gain then
destabilising the global stock trading systems with sophisticated high
frequency trading algorithms is definitely the way to go.

One guy in the UK in his parents bedroom can allegedly do this:

http://www.bbc.co.uk/news/business-32415664

People seem to get upset if you are too good at it!


yeh, you have to be a member of "the money sucking parasite club" to
manipulate prices and steal money like that


-Lasse

A small tax on transactions, like 0.1%, would have a remarkable
damping effect. Maybe we can get that soon, after the next monster
worldwide crash.


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 3 Jul 2015 06:57:48 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

Den fredag den 3. juli 2015 kl. 00.03.35 UTC+2 skrev John Larkin:
On Thu, 2 Jul 2015 14:26:20 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Den torsdag den 2. juli 2015 kl. 22.12.14 UTC+2 skrev John Larkin:
On Thu, 2 Jul 2015 12:33:39 -0700 (PDT), Lasse Langwadt Christensen
langwadt@fonz.dk> wrote:

Den torsdag den 2. juli 2015 kl. 21.24.41 UTC+2 skrev John Larkin:
On Thu, 02 Jul 2015 15:18:29 -0400, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 7/2/2015 3:00 PM, John Larkin wrote:
It's a shame that ADA wasn't as widely accepted. Programmers tend to
hate safe languages that make them be careful.

As someone who knows many folks who work in the software industry/game
design, it's kind of cute when EEs talk about things that they're so
very sure about...;-)

How many buffer overrun vulnerabilities has Windows had so far? Round
your answer to the nearest thousand.


and you know for certain that if only they had used ADA everything would
be perfect?

Not perfect, but runtime bounds checking and proper memory and stack
management keep strangers from executing data on your PC. And you
can't get wild pointers if you don't use pointers.
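
A contrived illustration (mine, not from the post) of the difference
bounds checking makes: an unchecked write past the end of a stack array
versus a checked access that throws instead of corrupting memory:

// buf[i] with i >= 4 silently smashes the stack; buf.at(i) refuses.
#include <iostream>
#include <stdexcept>
#include <vector>

void unchecked(int i)
{
    int buf[4] = {0, 0, 0, 0};
    buf[i] = 42;                // undefined behaviour when i >= 4
    std::cout << buf[0] << "\n";
}

void checked(int i)
{
    std::vector<int> buf(4, 0);
    buf.at(i) = 42;             // throws std::out_of_range when i >= 4
    std::cout << buf[0] << "\n";
}

int main()
{
    try {
        checked(7);
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << "\n";
    }
    // unchecked(7) is exactly the kind of bug a bounds-checked language
    // refuses to let through.
}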


if you make sure your soldering iron never gets over 50°C you won't burn your fingers if you grab the wrong end

-Lasse


I once dropped an iron and caught it in mid-air. Only once.

Some people can write bug-free, really solid c. Most programmers
can't. It's easier in embedded systems than in OSs and OS-level apps.

My people really like Python lately, for Windows and Linux engineering
and test software and such, not embedded. As an interpreter, it trades
performance for safety. We can always buy faster CPUs.

up to a point, haven't you been wishing for a faster LTspice?
if it was as easy as buying a faster CPU why don't you?

-Lasse

What I want is a 1000x speed improvement, so I can move sliders and
see waveforms change instantly, just like a breadboard with pots and a
scope. N-dimensional iteration at 5 minutes per trial is not
intuitive, but then 1 minute isn't either.

I did that kilovolt switcher design recently, and it was running 4
minutes per pass, until I removed the leakage inductance and got it
down below 1 minute. Still, I got the design done in a few hours.
Since I'll breadboard it anyhow, I didn't really need Spice.

My new Dell, due soon, will be maybe 5x faster. Spice will then
probably be hard drive limited rather than compute bound. It shows up
at about 50% CPU utilization on my old HP.

Spice on a GPU might make sense for some people, with ramdisk for the
.raw files, but I don't really need that.

Some FPGA compiles are slow, but again probably hard drive limited. Or
bloatware limited.






--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 03 Jul 2015 10:47:41 -0400, bitrex
<bitrex@de.lete.earthlink.net> wrote:

On 7/3/2015 9:58 AM, Phil Hobbs wrote:

Where that kind of performance isn't required, like, why not use C#? I
don't know that much C#, but from what I've explored it seems like a
breeze and a pleasure. Or when you don't even need a compiled language,
use Python. If you're OK with an interpreted language, you can often
get more accomplished in ten lines of Python than you could in 500 of C++.

I might learn Python this summer if things slow down a bit, but it's
really slow. My son Simon is using the CERN math toolkit (whose name I
forget--Jeroen, are you there?) and says it's powerful like Python but
about 100 times as fast. (He's working on a dark matter detection
experiment with 250 22-inch PMTs and a bunch of FPGAs and stuff.) I
suspect I may be better off improving my Octave skills, which are very
rusty.

You should definitely check out Python. SciPy and Numpy are the
scientific computing and numerical packages for Python respectively -
basically what it ends up is having a nice Python scripting language
interface to a scientific computing backend that's compiled FORTRAN code
which is pretty fast and supports multithreading/multiprocessing.

My signals+systems guy is doing exactly that, Python calling a FORTRAN
(all caps) based library. He had to learn FORTRAN to understand some
of the details.


--

John Larkin Highland Technology, Inc
picosecond timing laser drivers and controllers

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Fri, 03 Jul 2015 09:17:48 -0700, John Larkin
<jlarkin@highlandtechnology.com> Gave us:

I did that kilovolt switcher design recently, and it was running 4
minutes per pass, until I removed the leakage inductance and got it
down below 1 minute. Still, I got the design done in a few hours.
Since I'll breadboard it anyhow, I didn't really need Spice.

Did you post the fixed final design?
 
In article <mn62mc$rt0$1@speranza.aioe.org>, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 03/07/2015 02:43, Joe Gwinn wrote:
In article <mn4mc1$o6m$1@dont-email.me>, rickman <gnuarm@gmail.com
wrote:

On 7/2/2015 8:20 PM, Joe Gwinn wrote:
In article <3e2bpap8e70o7mnep2joboecjrdsggpp96@4ax.com>, John Larkin
jlarkin@highlandtechnology.com> wrote:

It's a shame that ADA wasn't as widely accepted. Programmers tend to
hate safe languages that make them be careful.

In the day, I was one of the many programmers who suffered through
Ada83. I made a little career of making Ada run fast enough to be
plausible in realtime applications like radars.

Ada was too big and complicated for its own good. Modula2 wasn't bad
for a minimalist hard realtime language with separate compilation of
modules and provision for low level system operations by design.

Yes. Nobody would have used Ada absent the DoD Mandate.


All the mainframe compilers in that era apart from the most expensive
high end FORTRAN had pretty lousy optimisers.

Generally true. My solution was to use a mix of Fortran and
assembly-coded fortran-callable subroutines and functions.

I have also used fortran in realtime systems, including to code
interrupt routines. This was before fortran developed call stacks,
which led to some very interesting bugs when shared routines were
fought over.


The method was simple but brutal - remove all of Ada that didn't look
exactly like Pascal, and if that wasn't enough, resort to assembly.
This was still Ada enough to qualify as Ada, and to meet the DoD
mandate.

Datapoint: My team implemented what would now be called middleware in
a severe subset of Ada83 plus some assembly code on a DEC VAX to
replace a pure-Ada message communications infrastructure. The subset
Ada plus assembly approach was literally ten times faster than the pure
Ada, and saved the project.

By the time Ada95 came out, it was too late - C/C++ had won.

Ada died because it was designed by academics who had no notion of
realtime, and thus made blunder after blunder, such that not even a DoD
Mandate could save Ada.

I think that is a somewhat jaundiced view of the situation (although I
am not a particular fan of Ada). Algol68 had similar speed complexity
tradeoffs in a previous era. Fortran compilers could run rings around it
performance-wise, but for algorithm development it was still very useful.

Not really. I suppose the best example is priority inversion (where a
low-priority task holding a resource needed by a high-priority task is
kept from running by medium-priority tasks), which I first encountered
when I became an embedded realtime programmer in 1974 - it was part of
the lore, an entry in the long list of blunders to be avoided, but had
neither a name nor a literature. My boss told me about it in 1974 while
I was learning the ropes.

The developers of Ada did not even know of that lore, never having
built and fielded a non-trivial embedded realtime system, and so fell
into this and other traps.

When the Ada gurus figured out why they were getting these devastating
random big delays, they named the cause and wrote many learned articles.


If you are an Ada expert, I will defer to your judgement. But was the
speed problem with Ada inherent in the language (therefore the fault of
the language designers) or was it just the shortcomings of the
implementations?

It was the design of Ada83 the language, which forced all Ada83
compilers to be clumsy, and to emit slow code. There are many details.

I think you will have to elaborate on that point.

A key example is Rendezvous. As mentioned before, I built a middleware
in Ada83 (plus some assembly) to replace Rendezvous, for a factor of
ten reduction in message-passing latency, and the elimination of forced
synchronism.

The basic interface had four APIs: Get empty message (a block of
memory from a linked list), send message, receive message, release
message for reuse. Message transfer is asynchronous - neither sender
nor receiver wait for the other, unless they want to.
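
A rough modern sketch of that four-call, pass-the-pointer style, purely
illustrative: the original was Ada83 plus assembly, and all the names
here are invented. Messages live in a fixed pool and only pointers move,
so nothing is ever copied:

#include <array>
#include <cstddef>
#include <deque>
#include <mutex>

struct Message {
    std::size_t   length = 0;
    unsigned char payload[256];
};

class MessagePool {
public:
    MessagePool() {
        for (auto& m : storage_) free_.push_back(&m);
    }
    // "Get empty message": take a free block, or nullptr if exhausted.
    Message* get_empty() {
        std::lock_guard<std::mutex> lk(m_);
        if (free_.empty()) return nullptr;
        Message* msg = free_.front(); free_.pop_front();
        return msg;
    }
    // "Send message": hand the pointer to the receive queue; no copy.
    void send(Message* msg) {
        std::lock_guard<std::mutex> lk(m_);
        ready_.push_back(msg);
    }
    // "Receive message": take the next pointer, or nullptr if none.
    Message* receive() {
        std::lock_guard<std::mutex> lk(m_);
        if (ready_.empty()) return nullptr;
        Message* msg = ready_.front(); ready_.pop_front();
        return msg;
    }
    // "Release message for reuse": put the block back on the free list.
    void release(Message* msg) {
        std::lock_guard<std::mutex> lk(m_);
        free_.push_back(msg);
    }

private:
    std::array<Message, 64> storage_;   // the fixed pool
    std::deque<Message*>    free_, ready_;
    std::mutex              m_;
};

int main()
{
    MessagePool pool;
    Message* m = pool.get_empty();
    if (m) {
        m->length = 5;
        pool.send(m);
        Message* r = pool.receive();    // same block, nothing copied
        pool.release(r);
    }
}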

The problem with message facilities having only send-message and
receive-message APIs is that the operating system and/or the
application level code (like the middleware) is forced to copy the
message contents time after time. DEC was very proud that the message
interfaces in VMS only copied the message text seven times going and
another seven times coming, for a total of fourteen times.

Given the amount of data to be sent and received in a radar, and the
speed of the computers of the day, copying the message text is simply
untenable, by at least two and probably three or four orders of
magnitude. Only a shared-memory pass-pointer architecture can work.

Which brings me to the second key example, shared memory. Ada83 simply
had no concept of shared memory and especially of multiprocessor
systems with global memory, despite wide use in embedded realtime
systems of the day.

To build the shared-memory pass-pointer architecture, it was necessary
to implement shared memory between different processes on different
processors. Ada83 simply could not do this, not even after clever
pruning of the language, because the code optimizer did not know and
could not be told that there were other entities that could change
memory.

The solution was to realize that according to the Ada LRM, the
optimizer was required to treat subroutine calls as atomic. So the
shared-memory stuff was done in Ada-callable assembly, where Ada could
not see, allowing us to keep Ada in ignorance of things she could not
understand.

The part about "because the code optimizer did not know and could not
be told that there were other entities that could change memory" also
crippled hardware control by reading and writing memory-mapped control
registers. Again, assembly-coded Ada-callable subroutines to the
rescue.
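
In present-day C or C++ the usual escape hatch for "other entities can
change memory" is volatile; a compile-only sketch for a hypothetical
peripheral, with invented register addresses, rather than the original
assembly approach:

// 'volatile' forces the compiler to perform every read and write
// instead of caching the value or deleting "redundant" accesses.
#include <cstdint>

#define CONTROL_REG (*reinterpret_cast<volatile std::uint32_t*>(0x40000004u))
#define STATUS_REG  (*reinterpret_cast<volatile std::uint32_t*>(0x40000000u))

void start_conversion()
{
    CONTROL_REG = 1u;                  // the write is not optimized away
    while ((STATUS_REG & 0x1u) == 0u)  // re-read on every iteration; the
        ;                              // hardware, not this code, sets the bit
}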


One obvious question is why we didn't use C, versus assembly, to
implement the shared-memory functions, given that we had DEC C. The
problem was political: The Ada zealots were afraid of C (their main
competitor), but not of assembly. The added effort of coding in
assembly was still less than the added effort of fighting the zealots
off if we used C. Although it would have been interesting to see how
the zealots dealt with a 100:1 to 1000:1 performance problem.


Offhand I can't see
anything in the Ada language specification apart from its immense size
and the enormous validation suite that forced Ada compilers to be clumsy
and emit slow code. I will concede that in the era where C was in the
ascendency a great deal more effort was expended on its code optimiser
than on any other language apart from Fortran.

The DoD had a deleterious role here. Their insistence that the entire
language be implemented from the start, without being able to start
small and grow, forced both compiler and validation suite to be huge,
and prevented any serious efforts to optimize the generated code.
Said another way, the world's supply of Ada compiler gurus was absorbed
with getting obscure parts of Ada and the suite to work. It was many
years before compilers good enough to be plausible emerged. It would
have been far better to start with the core 10%, and to allow changes
to the LRM as experience accumulated.

Data point: Back in the day, I compared the size in bytes of the DEC C
compiler and the DEC Ada83 compiler, both of which were well respected.
The Ada83 compiler was ten times the size of the C compiler.
Complexity is not linear in size, it's more like quadratic.


What could have changed in Ada95 that would speed up
implementations?

Fixing the many blunders of Ada83 was a large part of it. (There was
lots of field experience by then.) Adding true multiprocessor support
and the ability to control hardware was another. Adding support for
object-oriented programming (borrowed from C++). Etc.

All successful programming languages so far have had the following
things in common: Developed by a handful of people who wrote the
compiler at the same time. Not a committee writing a 300-page language
description in a vacuum. The early language and compilers co-evolved
as real users (typically the colleagues of the handful) used the
language for real work, and brought the problems back to the handful,
who fixed the problem by changing the language if needed, for a few
full versions. Full darwinist selection in the marketplace beyond the
handful and their colleagues - hundreds of languages qualify for the
common characteristics named before, but only a few languages achieve
more than a few hundred users. After such a language has succeeded in
the marketplace for some time, and has matured, it is formally
standardized (and thus frozen).

I don't disagree that most of the powerful languages started out as
someone's pet project that gained momentum locally and then globally.

Ada83 violated all of the above.


I know C has gotten faster over the years to the point
where it is hard to write hand optimized assembly that is faster. At
least that is what I read. I don't use C much these days and I can't
remember the last time I wrote assembly other than for stack machines
with a very simple assembly language.

Assembly language is always faster than compiled language, if competent
assembly programmers are used. And assembly can do things that are
very difficult in any compiled language, with direct hardware control
being a traditional area. But the cost is about four times that of
compiled C, so assembly is used sparingly. The UNIX kernel was about
4% assembly.

That isn't true any more with profile directed optimising compilers. The
compiler can often beat all but the very best handpicked assembler
practitioners. Register colouring, speculative execution and pipeline
stalls mean that most people cannot see the wood for the trees now. The
compiler (with the right CPU optimising flags set) does it seamlessly.

I've been hearing various forms of this claim about the then latest
compilers for the last 45 years, and it has never been true, for a very
simple reason: No compiler understands intent, or will tell the
programmer that if they only redesigned their entire program around a
specific special instruction, a 100:1 speedup would be possible.

My favorite special (SEL 32/55 and I believe IBM 360) instruction is
Execute Remote Indexed, which was used to implement a 3-dimensional
lookup table of actions to take when a specific button on a specific
control panel of a specific box was operated. The actions were single
assembly instructions that could be anything from changing a date value
to calling a subroutine. There were at least 20,000 selectable
actions.

Coming back to optimizing compilers, the real case for them is
economic: While any good assembly programmer can beat the compiler, it
is not usually worthwhile to do so, because computers have gotten fast
enough and cheap enough that shaving 10% off simply isn't worth it.


There is also a nasty trick: write the code in C, and look at the
resulting generated assembly code. If it's worse than what one can
write manually, paraphrase the C code and try again. Eventually, the
optimum paraphrase will be found. In practice, C compilers are
sufficiently alike that a good paraphrase works on most compilers.

A surprising number of modern compilers will reduce common loop
constructs to almost identical canonical code. Certain constructs are
faster on some Intel CPUs than on others so it chooses the right one.

I use mostly VHDL for hardware design which is very Ada like and Forth
for apps which is like a stack machine assembly language. A bit of a
strange dichotomy, but there is no Forth-like HDL.

VHDL was expressly based on Ada, and Verilog on C.

FORTH is a pure stack language, most resembling an HP calculator (with
RPN), and was developed for machine control by an astronomer. The
machines being controlled include large telescopes.

I was very interested in FORTH when it went public and they were being
very close-mouthed. I ended up developing my own prototype language,
which I called Fifth. The interpreter core was written in assembly, as
that was how to get the necessary speed. But I found Fifth to be too
limiting for what I was then doing, and never pursued it.

I don't remember it being all that closed mouthed. It was in relatively
wide use in the late 1970's for telescope control and I thought the
magnetic tapes were circulating fairly freely in that community.

There was even a Forth for the venerable BBC micro in about 1983.

By 1983, the FORTH folk had gotten over it, but in 1972 or so, when it
first came out, they would say very little about how it worked, and
they wanted $10,000 for access. In those days, a Volvo sedan was about
$3,000.

I asked for a list of ten happy users to talk to, and started calling.
Only the users that had cracked the kernel were able to program in
FORTH, so, I got an octal dump of the PDP-11 FORTH kernel from one
happy user, and reverse engineered the kernel. Then I knew why they
were so close-mouthed - it was far too small and too simple to justify
such a price. So, I wrote Fifth.

Joe Gwinn
 
On 7/3/2015 12:21 PM, John Larkin wrote:
On Fri, 03 Jul 2015 10:47:41 -0400, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 7/3/2015 9:58 AM, Phil Hobbs wrote:

Where that kind of performance isn't required, like, why not use C#? I
don't know that much C#, but from what I've explored it seems like a
breeze and a pleasure. Or when you don't even need a compiled language,
use Python. If you're OK with an interpreted language, you can often
get more accomplished in ten lines of Python than you could in 500 of C++.

I might learn Python this summer if things slow down a bit, but it's
really slow. My son Simon is using the CERN math toolkit (whose name I
forget--Jeroen, are you there?) and says it's powerful like Python but
about 100 times as fast. (He's working on a dark matter detection
experiment with 250 22-inch PMTs and a bunch of FPGAs and stuff.) I
suspect I may be better off improving my Octave skills, which are very
rusty.

You should definitely check out Python. SciPy and Numpy are the
scientific computing and numerical packages for Python respectively -
basically what it ends up is having a nice Python scripting language
interface to a scientific computing backend that's compiled FORTRAN code
which is pretty fast and supports multithreading/multiprocessing.

My signals+systems guy is doing exactly that, Python calling a FORTRAN
(all caps) based library. He had to learn FORTRAN to understand some
of the details.

Yeah, it's a really nice setup. It's the way much software these days
_should_ be engineered I think, and it's the way big name title video
games for consoles and PC have been engineered for a long time.

The core engine/artificial intelligence/graphics is written by the rock
star coders in C++ or some high-performance language, and then the game
mechanics and story and other elements that don't require blazing speed
are scripted in some interpreted language like Python or Lua, something
to stitch together the high-performance set pieces.

It makes rapid modification of the game in response to playtester
feedback really easy, and then these modifications can be performed
quite well by ordinary humans that don't need to be ultra hotshot
coders; performance in that part at least is sacrificed for ease of
maintenance and easy modding.
 
