You can file for a US patent up to a year after having divulged the
idea.
That grace period does not apply to foreign filing. There you lose the
right to file immediately after divulging. So, foreign filing is more
demanding, not less.
As far as equations vs. LUTs go, I think it makes no difference. But
equations may be more widely understood. BTW, the OP's example is
confusing, using a logic equation that is really just AND and OR...
Peter Alfke (with about 30 patents, but all filed by company patent
lawyers)
Hi,
I am writing an FPGA-related patent application and have no prior
experience with patent writing.
I found that in Xilinx patents, all lookup-table equations are
described as AND/OR/multiplexer circuits in the claims. Describing the
logic of a lookup table that way in a claim is much more complex in
English than presenting an equivalent logic equation.
For example, a lookup table has the equation:
Out <= (A*B) + (C*D);
The equation is much more concise than describing the circuit as
AND/OR gates.
Do you have experience with, or any advice on, writing an equivalent
logic equation in a patent claim?
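
For comparison, here is a rough VHDL sketch of the example above (entity and
signal names are invented for illustration, not taken from any actual claim),
written once as the one-line equation (in VHDL the operators are "and"/"or"
rather than "*"/"+") and once as the explicit AND/OR gate structure a claim
might spell out:

library ieee;
use ieee.std_logic_1164.all;

entity lut_example is
  port (
    a, b, c, d : in  std_logic;
    out_eq     : out std_logic;   -- behavioural: the one-line equation
    out_gates  : out std_logic);  -- structural: explicit AND/OR network
end entity lut_example;

architecture rtl of lut_example is
  signal and1, and2 : std_logic;
begin
  -- equation form of the example: Out <= (A and B) or (C and D)
  out_eq <= (a and b) or (c and d);

  -- gate-level form: a first AND gate receives A and B, a second AND gate
  -- receives C and D, and an OR gate combines the two AND outputs
  and1      <= a and b;
  and2      <= c and d;
  out_gates <= and1 or and2;
end architecture rtl;

Both outputs compute exactly the same function; the second is simply the
equation expanded into named intermediate gates.
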
Weng Tianxiang wrote:
Do you have experience with, or any advice on, writing an equivalent
logic equation in a patent claim?
You should be aware that 'Clarity' and 'Patent' are often mutually
exclusive.
Patent lawyers have motivation to obfuscate, for many reasons.
Patents are merely a license to litigate (and an income stream for the
lawyer), so they tend to break them into many small claims that can be
argued.
If there is prior art, it also helps to sound a lot different, even if
what you are doing is much the same.
This also helps to get over the first hurdle, the patent examiner.
Most (all?) FPGA patents will be electronically searchable, so scan those
yourself, and then "work your claim into the gaps" between those patents.
-jg
I am a mature student who will be doing some complex VHDL and Verilog design
work for my course. As well as having to create and test the
functionality of the design (in both languages), I want to document how
the design is put together and its complex hierarchy.
Is the OP not getting pixels in raster-scan order, though?
JPEGs are lossy because of the quantization step. You can do it without
the quantization step and still notice a significant compression. If
you preload your quantization constants and Huffman codes into lookup
tables, you can easily process one pixel per clock cycle in a 1500-gate
FPGA. I wrote a fully pipelined version that queued up the first eight
rows of an incoming image into block RAM before starting on the DCTs.
It worked great. Ideally you would do DCTs on blocks larger than 8x8,
but the advantage of 8x8 is that you can easily do 64 8-bit operations
in parallel, which is nice for the Z-ordering, etc. Bigger squares
require bigger chips and external memory, and as soon as you have to go
to external memory you lose your pipelining.
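
As a very rough sketch of the "preload the quantization constants into a
lookup table, one value per clock" idea (my own illustration, not the
poster's actual design): a small ROM holds fixed-point reciprocals of the 64
quantization constants, so the divide by the quantization step becomes a
pipelined multiply and shift. The bit widths and the placeholder table
contents below are assumptions.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity quantizer is
  port (
    clk      : in  std_logic;
    coef_in  : in  signed(11 downto 0);   -- one DCT coefficient per clock
    index_in : in  unsigned(5 downto 0);  -- position 0..63 within the 8x8 block
    q_out    : out signed(11 downto 0));  -- quantized coefficient
end entity quantizer;

architecture rtl of quantizer is
  type recip_rom_t is array (0 to 63) of unsigned(15 downto 0);
  -- Placeholder contents: 4096/Q for every position, here with Q = 16.
  -- A real design would fill this table from the chosen quantization matrix.
  constant RECIP_ROM : recip_rom_t := (others => to_unsigned(256, 16));
  signal product : signed(28 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- stage 1: multiply by the reciprocal fetched from the lookup table
      product <= coef_in * signed(resize(RECIP_ROM(to_integer(index_in)), 17));
      -- stage 2: drop the 12 fraction bits to complete the divide-by-Q
      -- (rounding details are omitted in this sketch)
      q_out   <= resize(shift_right(product, 12), 12);
    end if;
  end process;
end architecture rtl;

Two registered stages give the "one value per clock" throughput with a
latency of two cycles.
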
The output bit rate will vary, but can be bounded.
Given that a lossless system is inevitably 'variable bit rate'
(VBR), the concept of "real-time capability" is somewhat vague;
the latency is bound to be variable.
Hello community,
I am thinking about implementing a real-time compression scheme on an FPGA
running at about 500 MHz. Since there is no "universal compression"
algorithm that can compress data regardless of its structure and statistics,
I assume grayscale image data. The image data is delivered line-wise,
meaning that one horizontal line is processed, then the next one, and so on.
Because of the high data rate I cannot spend much time on a DFT or DCT or on
data modelling. What I am looking for is a way to compress the pixel data in
the spatial rather than the spectral domain, because of latency, processing
complexity, etc. Because the data arrive sequentially, line by line, block
matching is also not possible in my opinion. The compression ratio is not so
important; a factor of 2:1 would be sufficient. What really matters is the
real-time capability. The algorithm should be pipelineable and fast. The
memory requirements should not exceed 1 kb.
What "standard" compression schemes would you recommend?
Are there possibilities for a non-standard "own solution"?
Melanie Nasic wrote:
What "standard" compression schemes would you recommend?
JPEG supports lossless encoding that can fit (at least roughly) within
the constraints you've imposed. It uses linear prediction of the
current pixel based on one or more previous pixels. The difference
between the prediction and the actual value is what's then encoded. The
difference is encoded in two parts: the number of bits needed for the
difference and the difference itself. The number of bits is Huffman
encoded, but the remainder is not.
This has a number of advantages. First and foremost, it can be done
based on only the current scan line or (depending on the predictor you
choose) only one scan line plus one pixel. In the latter case, you need
to (minutely) modify the model you've outlined though -- instead of
reading, compressing, and discarding an entire scan line, then starting
the next, you always retain one scan line worth of data. As you process
pixel X of scan line Y, you're storing pixels 0 through X+1 of the
current scan line plus pixels X-1 through N (=line width) of the
previous scan line.
Another nice point is that the math involved is always simple -- the
most complex case is one addition, one subtraction and a one-bit right
shift.
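
Here is a minimal VHDL sketch of that predictive step as I read it (not code
from the post). The line buffer that supplies the "above" and "above-left"
neighbours is assumed to exist elsewhere; this block only forms the
prediction B + (A - C)/2, the prediction error, and the number of bits
needed for that error, i.e. the category that would be Huffman coded while
the remaining bits are sent raw. Port widths assume 8-bit pixels.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity lossless_predict is
  port (
    clk      : in  std_logic;
    px       : in  unsigned(7 downto 0);   -- current pixel
    a_left   : in  unsigned(7 downto 0);   -- previous pixel, same line
    b_above  : in  unsigned(7 downto 0);   -- same column, previous line
    c_diag   : in  unsigned(7 downto 0);   -- previous column, previous line
    diff_out : out signed(9 downto 0);     -- prediction error
    ssss_out : out unsigned(3 downto 0));  -- bits needed for the error
end entity lossless_predict;

architecture rtl of lossless_predict is
begin
  process (clk)
    variable pred : signed(9 downto 0);
    variable diff : signed(9 downto 0);
    variable mag  : unsigned(9 downto 0);
    variable n    : unsigned(3 downto 0);
  begin
    if rising_edge(clk) then
      -- one subtraction, one one-bit right shift, one addition
      pred := signed(resize(b_above, 10))
              + shift_right(signed(resize(a_left, 10))
                            - signed(resize(c_diag, 10)), 1);
      diff := signed(resize(px, 10)) - pred;

      -- category = number of bits in |diff|; the category is Huffman coded
      -- downstream and the low bits of the error are appended uncoded
      mag := unsigned(abs(diff));
      n   := (others => '0');
      for i in 0 to 9 loop
        if mag(i) = '1' then
          n := to_unsigned(i + 1, 4);
        end if;
      end loop;

      diff_out <= diff;
      ssss_out <= n;
    end if;
  end process;
end architecture rtl;

In a real encoder the diff/category pair would then feed the Huffman stage
described above.
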
Are there possibilities for a non-standard "own solution"?
Yes, almost certainly. Lossless JPEG is open to considerable
improvement. Just for an obvious example, it's pretty easy to predict
the current pixel based on five neighboring pixels instead of three. At
least in theory, this should improve prediction accuracy by close to
40% -- thus reducing the number of bits needed to encode the difference
between the predicted and actual values. At a guess, you won't really
see 40% improvement, but you'll still see a little improvement.
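
Purely to illustrate the five-neighbour idea (the post does not say which
five pixels or what weights to use, so the choice below, weights of
4,1,1,1,1 summing to 8, is an assumption picked to keep the predictor
shift-and-add only):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity predict5 is
  port (
    a_left  : in  unsigned(7 downto 0);  -- previous pixel, current line
    a_left2 : in  unsigned(7 downto 0);  -- two pixels back, current line
    b_left  : in  unsigned(7 downto 0);  -- previous line, one column left
    b_above : in  unsigned(7 downto 0);  -- previous line, same column
    b_right : in  unsigned(7 downto 0);  -- previous line, one column right
    pred    : out unsigned(7 downto 0)); -- predicted value of the current pixel
end entity predict5;

architecture rtl of predict5 is
  signal sum : unsigned(10 downto 0);
begin
  -- weight 4 on the nearest pixel, weight 1 on the other four neighbours
  sum  <= shift_left(resize(a_left, 11), 2)
        + resize(a_left2, 11) + resize(b_left, 11)
        + resize(b_above, 11) + resize(b_right, 11);
  -- divide the weighted sum (weights total 8) by 8
  pred <= resize(shift_right(sum, 3), 8);
end architecture rtl;
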
The more recent JPEG-LS standard is certainly an
improvement, but if memory serves, it requires storing roughly two full
scan lines instead of roughly one scan line. OTOH, it would be pretty
easy to steal some of the ideas in JPEG LS without using the parts that
require more storage -- some things like its handling of runs are
mostly a matter of encoding that shouldn't really require much extra
storage.
The final question, however, is whether any of these is likely to give
you 2:1 compression. That'll depend on your input data -- for typical
photographs, I doubt that'll happen most of the time. For things like
line art, faxes, etc., you can probably do quite a bit better than 2:1
on a fairly regular basis. If you're willing to settle for nearly
lossless compression, you can improve ratios a bit further.
--
Later,
Jerry.
Hi Jerry,
thanks for your response(s). Sounds quite promising. Do you know anything
about hardware implementations of the compression schemes you propose? Are
there already VHDL examples available, or at least C reference models?
I don't believe I've seen any VHDL code for it.
"ma" <ma@nowhere.com> schrieb im Newsbeitrag
news:LcBvf.86365$PD2.51133@fe1.news.blueyonder.co.uk...
Hello,
I have a Virtex-4 PCI board and I would like to program the PowerPC on it. I
don't have the EDK from Xilinx. Here are my questions:
How can I program the PowerPC without buying EDK?
short answer: you cannot
long answer: you can, if you write your own minimal replacement for EDK
As far as I know, the compiler and linker are free (part of GNU); where can I
get them?
PPC gcc can be obtained, but it won't help you much; see above
How can I download the compiled program to the PowerPC?
over JTAG, or by preloading the BRAMs
How can I get the output? For example, if I write a hello world type of
program, can I see the STDIO on screen?
use EDK, or add your own peripherals and re-implement all the functionality
provided by EDK
Any help is much appreciated.
Doing it without EDK costs you WAY more than obtaining EDK. It could
be done, but the time needed just isn't worth it.
Sorry, but Xilinx REALLY REALLY doesn't want anyone to work on the Virtex
PPC without using EDK. It is doable (without EDK), but it really isn't worth
trying.
Antti
Frank,
See:
http://www.deepchip.com/posts/0184.html
HTH
Ajeetha
www.noveldv.com
That is the case. See below.
ModelSim's implementation of "real" is basically double-precision
floating-point, but shorn of its meta-values.